00:00:00.001 Started by upstream project "autotest-per-patch" build number 132303 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.103 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.103 The recommended git tool is: git 00:00:00.104 using credential 00000000-0000-0000-0000-000000000002 00:00:00.105 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.159 Fetching changes from the remote Git repository 00:00:00.161 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.206 Using shallow fetch with depth 1 00:00:00.206 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.206 > git --version # timeout=10 00:00:00.240 > git --version # 'git version 2.39.2' 00:00:00.240 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.265 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.265 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.843 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.854 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.865 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.865 > git config core.sparsecheckout # timeout=10 00:00:06.875 > git read-tree -mu HEAD # timeout=10 00:00:06.889 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:06.904 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.904 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:06.980 [Pipeline] Start of Pipeline 00:00:06.994 [Pipeline] library 00:00:06.996 Loading library shm_lib@master 00:00:06.996 Library shm_lib@master is cached. Copying from home. 00:00:07.018 [Pipeline] node 00:00:07.027 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.030 [Pipeline] { 00:00:07.040 [Pipeline] catchError 00:00:07.042 [Pipeline] { 00:00:07.056 [Pipeline] wrap 00:00:07.063 [Pipeline] { 00:00:07.070 [Pipeline] stage 00:00:07.071 [Pipeline] { (Prologue) 00:00:07.274 [Pipeline] sh 00:00:07.558 + logger -p user.info -t JENKINS-CI 00:00:07.572 [Pipeline] echo 00:00:07.573 Node: GP11 00:00:07.579 [Pipeline] sh 00:00:07.880 [Pipeline] setCustomBuildProperty 00:00:07.890 [Pipeline] echo 00:00:07.891 Cleanup processes 00:00:07.896 [Pipeline] sh 00:00:08.182 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.182 821433 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.194 [Pipeline] sh 00:00:08.475 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.475 ++ grep -v 'sudo pgrep' 00:00:08.475 ++ awk '{print $1}' 00:00:08.475 + sudo kill -9 00:00:08.475 + true 00:00:08.490 [Pipeline] cleanWs 00:00:08.501 [WS-CLEANUP] Deleting project workspace... 00:00:08.501 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.508 [WS-CLEANUP] done 00:00:08.513 [Pipeline] setCustomBuildProperty 00:00:08.528 [Pipeline] sh 00:00:08.812 + sudo git config --global --replace-all safe.directory '*' 00:00:08.921 [Pipeline] httpRequest 00:00:10.258 [Pipeline] echo 00:00:10.260 Sorcerer 10.211.164.101 is alive 00:00:10.271 [Pipeline] retry 00:00:10.273 [Pipeline] { 00:00:10.286 [Pipeline] httpRequest 00:00:10.291 HttpMethod: GET 00:00:10.291 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.293 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.302 Response Code: HTTP/1.1 200 OK 00:00:10.303 Success: Status code 200 is in the accepted range: 200,404 00:00:10.303 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:11.362 [Pipeline] } 00:00:11.382 [Pipeline] // retry 00:00:11.391 [Pipeline] sh 00:00:11.684 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:11.701 [Pipeline] httpRequest 00:00:13.509 [Pipeline] echo 00:00:13.511 Sorcerer 10.211.164.101 is alive 00:00:13.521 [Pipeline] retry 00:00:13.523 [Pipeline] { 00:00:13.537 [Pipeline] httpRequest 00:00:13.542 HttpMethod: GET 00:00:13.543 URL: http://10.211.164.101/packages/spdk_c46ddd981d9f69655d9cfd0fa085e903e0764826.tar.gz 00:00:13.543 Sending request to url: http://10.211.164.101/packages/spdk_c46ddd981d9f69655d9cfd0fa085e903e0764826.tar.gz 00:00:13.561 Response Code: HTTP/1.1 200 OK 00:00:13.561 Success: Status code 200 is in the accepted range: 200,404 00:00:13.562 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c46ddd981d9f69655d9cfd0fa085e903e0764826.tar.gz 00:01:47.461 [Pipeline] } 00:01:47.477 [Pipeline] // retry 00:01:47.484 [Pipeline] sh 00:01:47.768 + tar --no-same-owner -xf spdk_c46ddd981d9f69655d9cfd0fa085e903e0764826.tar.gz 00:01:50.313 [Pipeline] sh 00:01:50.599 + git -C spdk log --oneline -n5 00:01:50.599 c46ddd981 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:01:50.599 4bcab9fb9 correct kick for CQ full case 00:01:50.599 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:50.599 318515b44 nvme/perf: interrupt mode support for pcie controller 00:01:50.599 7bc1134d6 test/scheduler: Read PID's status file only once 00:01:50.611 [Pipeline] } 00:01:50.624 [Pipeline] // stage 00:01:50.631 [Pipeline] stage 00:01:50.633 [Pipeline] { (Prepare) 00:01:50.648 [Pipeline] writeFile 00:01:50.661 [Pipeline] sh 00:01:50.947 + logger -p user.info -t JENKINS-CI 00:01:50.961 [Pipeline] sh 00:01:51.251 + logger -p user.info -t JENKINS-CI 00:01:51.263 [Pipeline] sh 00:01:51.549 + cat autorun-spdk.conf 00:01:51.549 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.549 SPDK_TEST_NVMF=1 00:01:51.549 SPDK_TEST_NVME_CLI=1 00:01:51.549 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:51.549 SPDK_TEST_NVMF_NICS=e810 00:01:51.549 SPDK_TEST_VFIOUSER=1 00:01:51.549 SPDK_RUN_UBSAN=1 00:01:51.549 NET_TYPE=phy 00:01:51.557 RUN_NIGHTLY=0 00:01:51.561 [Pipeline] readFile 00:01:51.583 [Pipeline] withEnv 00:01:51.584 [Pipeline] { 00:01:51.597 [Pipeline] sh 00:01:51.884 + set -ex 00:01:51.884 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:51.884 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:51.884 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.884 ++ SPDK_TEST_NVMF=1 00:01:51.884 ++ SPDK_TEST_NVME_CLI=1 00:01:51.884 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:51.884 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:51.884 ++ SPDK_TEST_VFIOUSER=1 00:01:51.884 ++ SPDK_RUN_UBSAN=1 00:01:51.884 ++ NET_TYPE=phy 00:01:51.884 ++ RUN_NIGHTLY=0 00:01:51.884 + case $SPDK_TEST_NVMF_NICS in 00:01:51.884 + DRIVERS=ice 00:01:51.884 + [[ tcp == \r\d\m\a ]] 00:01:51.884 + [[ -n ice ]] 00:01:51.884 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:51.884 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:51.884 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:51.884 rmmod: ERROR: Module irdma is not currently loaded 00:01:51.884 rmmod: ERROR: Module i40iw is not currently loaded 00:01:51.884 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:51.884 + true 00:01:51.884 + for D in $DRIVERS 00:01:51.884 + sudo modprobe ice 00:01:51.884 + exit 0 00:01:51.894 [Pipeline] } 00:01:51.908 [Pipeline] // withEnv 00:01:51.913 [Pipeline] } 00:01:51.925 [Pipeline] // stage 00:01:51.934 [Pipeline] catchError 00:01:51.936 [Pipeline] { 00:01:51.949 [Pipeline] timeout 00:01:51.949 Timeout set to expire in 1 hr 0 min 00:01:51.951 [Pipeline] { 00:01:51.964 [Pipeline] stage 00:01:51.966 [Pipeline] { (Tests) 00:01:51.977 [Pipeline] sh 00:01:52.263 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:52.263 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:52.263 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:52.263 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:52.263 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:52.263 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:52.263 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:52.263 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:52.263 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:52.263 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:52.263 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:52.263 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:52.263 + source /etc/os-release 00:01:52.263 ++ NAME='Fedora Linux' 00:01:52.263 ++ VERSION='39 (Cloud Edition)' 00:01:52.263 ++ ID=fedora 00:01:52.263 ++ VERSION_ID=39 00:01:52.263 ++ VERSION_CODENAME= 00:01:52.263 ++ PLATFORM_ID=platform:f39 00:01:52.263 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:52.263 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:52.263 ++ LOGO=fedora-logo-icon 00:01:52.263 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:52.263 ++ HOME_URL=https://fedoraproject.org/ 00:01:52.263 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:52.263 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:52.263 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:52.263 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:52.263 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:52.263 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:52.263 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:52.263 ++ SUPPORT_END=2024-11-12 00:01:52.263 ++ VARIANT='Cloud Edition' 00:01:52.263 ++ VARIANT_ID=cloud 00:01:52.263 + uname -a 00:01:52.263 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:52.263 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:53.204 Hugepages 00:01:53.204 node hugesize free / total 00:01:53.204 node0 1048576kB 0 / 0 00:01:53.204 node0 2048kB 0 / 0 00:01:53.204 node1 1048576kB 0 / 0 00:01:53.204 node1 2048kB 0 / 0 00:01:53.204 
00:01:53.204 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:53.204 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:53.204 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:53.204 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:53.204 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:53.204 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:53.204 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:53.204 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:53.204 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:53.204 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:53.204 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:53.204 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:53.204 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:53.204 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:53.204 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:53.204 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:53.204 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:53.463 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:53.463 + rm -f /tmp/spdk-ld-path 00:01:53.463 + source autorun-spdk.conf 00:01:53.463 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.463 ++ SPDK_TEST_NVMF=1 00:01:53.463 ++ SPDK_TEST_NVME_CLI=1 00:01:53.463 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:53.463 ++ SPDK_TEST_NVMF_NICS=e810 00:01:53.463 ++ SPDK_TEST_VFIOUSER=1 00:01:53.463 ++ SPDK_RUN_UBSAN=1 00:01:53.463 ++ NET_TYPE=phy 00:01:53.463 ++ RUN_NIGHTLY=0 00:01:53.463 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:53.463 + [[ -n '' ]] 00:01:53.463 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:53.463 + for M in /var/spdk/build-*-manifest.txt 00:01:53.463 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:53.463 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:53.463 + for M in /var/spdk/build-*-manifest.txt 00:01:53.463 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:53.463 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:53.463 + for M in /var/spdk/build-*-manifest.txt 00:01:53.463 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:53.463 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:53.463 ++ uname 00:01:53.463 + [[ Linux == \L\i\n\u\x ]] 00:01:53.463 + sudo dmesg -T 00:01:53.463 + sudo dmesg --clear 00:01:53.463 + dmesg_pid=822111 00:01:53.463 + [[ Fedora Linux == FreeBSD ]] 00:01:53.463 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.463 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.463 + sudo dmesg -Tw 00:01:53.463 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:53.463 + [[ -x /usr/src/fio-static/fio ]] 00:01:53.463 + export FIO_BIN=/usr/src/fio-static/fio 00:01:53.463 + FIO_BIN=/usr/src/fio-static/fio 00:01:53.463 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:53.463 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:53.463 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:53.463 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.463 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.463 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:53.463 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.463 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.463 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:53.463 12:23:33 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:53.463 12:23:33 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:53.463 12:23:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.463 12:23:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:53.463 12:23:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:53.463 12:23:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:53.463 12:23:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:53.463 12:23:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:53.463 12:23:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:53.463 12:23:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:53.463 12:23:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:53.463 12:23:33 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:53.463 12:23:33 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:53.464 12:23:33 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:53.464 12:23:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:53.464 12:23:33 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:53.464 12:23:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:53.464 12:23:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:53.464 12:23:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:53.464 12:23:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.464 12:23:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.464 12:23:33 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.464 12:23:33 -- paths/export.sh@5 -- $ export PATH 00:01:53.464 12:23:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.464 12:23:33 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:53.464 12:23:33 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:53.464 12:23:33 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731669813.XXXXXX 00:01:53.464 12:23:33 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731669813.g0qMbY 00:01:53.464 12:23:33 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:53.464 12:23:33 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:53.464 12:23:33 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:53.464 12:23:33 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:53.464 12:23:33 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:53.464 12:23:33 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:53.464 12:23:33 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:53.464 12:23:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.464 12:23:33 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:53.464 12:23:33 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:53.464 12:23:33 -- pm/common@17 -- $ local monitor 00:01:53.464 12:23:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.464 12:23:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.464 12:23:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.464 12:23:33 -- pm/common@21 -- $ date +%s 00:01:53.464 12:23:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.464 12:23:33 -- pm/common@21 -- $ date +%s 00:01:53.464 12:23:33 -- pm/common@25 -- $ sleep 1 00:01:53.464 12:23:33 -- pm/common@21 -- $ date +%s 00:01:53.464 12:23:33 -- pm/common@21 -- $ date +%s 00:01:53.464 12:23:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731669813 00:01:53.464 12:23:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731669813 00:01:53.464 12:23:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731669813 00:01:53.464 12:23:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731669813 00:01:53.464 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731669813_collect-vmstat.pm.log 00:01:53.464 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731669813_collect-cpu-load.pm.log 00:01:53.464 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731669813_collect-cpu-temp.pm.log 00:01:53.464 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731669813_collect-bmc-pm.bmc.pm.log 00:01:54.849 12:23:34 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:54.849 12:23:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:54.849 12:23:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:54.849 12:23:34 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.849 12:23:34 -- spdk/autobuild.sh@16 -- $ date -u 00:01:54.849 Fri Nov 15 11:23:34 AM UTC 2024 00:01:54.849 12:23:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:54.849 v25.01-pre-188-gc46ddd981 00:01:54.849 12:23:34 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:54.849 12:23:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:54.849 12:23:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:54.849 12:23:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:54.849 12:23:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:54.849 12:23:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.849 ************************************ 00:01:54.849 START TEST ubsan 00:01:54.849 ************************************ 00:01:54.849 12:23:34 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:54.849 using ubsan 00:01:54.849 00:01:54.849 real 0m0.000s 00:01:54.849 user 0m0.000s 00:01:54.849 sys 0m0.000s 00:01:54.849 12:23:34 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:54.849 12:23:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:54.849 ************************************ 00:01:54.849 END TEST ubsan 00:01:54.849 ************************************ 00:01:54.849 12:23:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:54.849 12:23:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:54.849 12:23:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:54.849 12:23:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:54.849 12:23:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:54.849 12:23:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:54.849 12:23:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:54.849 12:23:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:54.849 
12:23:34 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:54.849 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:54.849 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:55.108 Using 'verbs' RDMA provider 00:02:05.669 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:15.652 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:15.911 Creating mk/config.mk...done. 00:02:15.911 Creating mk/cc.flags.mk...done. 00:02:15.911 Type 'make' to build. 00:02:15.911 12:23:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:02:15.911 12:23:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:15.911 12:23:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:15.911 12:23:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.911 ************************************ 00:02:15.911 START TEST make 00:02:15.911 ************************************ 00:02:15.911 12:23:56 make -- common/autotest_common.sh@1129 -- $ make -j48 00:02:16.173 make[1]: Nothing to be done for 'all'. 00:02:18.092 The Meson build system 00:02:18.092 Version: 1.5.0 00:02:18.092 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:18.092 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:18.092 Build type: native build 00:02:18.092 Project name: libvfio-user 00:02:18.092 Project version: 0.0.1 00:02:18.092 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:18.092 C linker for the host machine: cc ld.bfd 2.40-14 00:02:18.092 Host machine cpu family: x86_64 00:02:18.092 Host machine cpu: x86_64 00:02:18.092 Run-time dependency threads found: YES 00:02:18.092 Library dl found: YES 00:02:18.092 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:18.092 Run-time dependency json-c found: YES 0.17 00:02:18.092 Run-time dependency cmocka found: YES 1.1.7 00:02:18.092 Program pytest-3 found: NO 00:02:18.092 Program flake8 found: NO 00:02:18.092 Program misspell-fixer found: NO 00:02:18.092 Program restructuredtext-lint found: NO 00:02:18.092 Program valgrind found: YES (/usr/bin/valgrind) 00:02:18.093 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:18.093 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:18.093 Compiler for C supports arguments -Wwrite-strings: YES 00:02:18.093 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:18.093 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:18.093 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:18.093 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:18.093 Build targets in project: 8 00:02:18.093 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:18.093 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:18.093 00:02:18.093 libvfio-user 0.0.1 00:02:18.093 00:02:18.093 User defined options 00:02:18.093 buildtype : debug 00:02:18.093 default_library: shared 00:02:18.093 libdir : /usr/local/lib 00:02:18.093 00:02:18.093 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:18.666 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:18.931 [1/37] Compiling C object samples/null.p/null.c.o 00:02:18.931 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:18.931 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:18.931 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:18.931 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:18.931 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:18.931 [7/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:18.931 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:18.931 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:18.931 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:18.931 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:18.931 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:18.931 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:19.192 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:19.192 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:19.192 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:19.192 [17/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:19.192 [18/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:19.192 [19/37] Compiling C object samples/server.p/server.c.o 00:02:19.192 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:19.192 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:19.192 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:19.192 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:19.192 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:19.192 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:19.192 [26/37] Compiling C object samples/client.p/client.c.o 00:02:19.192 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:19.192 [28/37] Linking target samples/client 00:02:19.192 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:19.192 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:19.453 [31/37] Linking target test/unit_tests 00:02:19.453 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:19.453 [33/37] Linking target samples/server 00:02:19.453 [34/37] Linking target samples/lspci 00:02:19.453 [35/37] Linking target samples/null 00:02:19.453 [36/37] Linking target samples/gpio-pci-idio-16 00:02:19.453 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:19.453 INFO: autodetecting backend as ninja 00:02:19.453 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:19.718 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:20.297 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:20.297 ninja: no work to do. 00:02:25.563 The Meson build system 00:02:25.563 Version: 1.5.0 00:02:25.563 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:25.563 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:25.563 Build type: native build 00:02:25.564 Program cat found: YES (/usr/bin/cat) 00:02:25.564 Project name: DPDK 00:02:25.564 Project version: 24.03.0 00:02:25.564 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:25.564 C linker for the host machine: cc ld.bfd 2.40-14 00:02:25.564 Host machine cpu family: x86_64 00:02:25.564 Host machine cpu: x86_64 00:02:25.564 Message: ## Building in Developer Mode ## 00:02:25.564 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:25.564 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:25.564 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:25.564 Program python3 found: YES (/usr/bin/python3) 00:02:25.564 Program cat found: YES (/usr/bin/cat) 00:02:25.564 Compiler for C supports arguments -march=native: YES 00:02:25.564 Checking for size of "void *" : 8 00:02:25.564 Checking for size of "void *" : 8 (cached) 00:02:25.564 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:25.564 Library m found: YES 00:02:25.564 Library numa found: YES 00:02:25.564 Has header "numaif.h" : YES 00:02:25.564 Library fdt found: NO 00:02:25.564 Library execinfo found: NO 00:02:25.564 Has header "execinfo.h" : YES 00:02:25.564 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:25.564 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:25.564 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:25.564 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:25.564 Run-time dependency openssl found: YES 3.1.1 00:02:25.564 Run-time dependency libpcap found: YES 1.10.4 00:02:25.564 Has header "pcap.h" with dependency libpcap: YES 00:02:25.564 Compiler for C supports arguments -Wcast-qual: YES 00:02:25.564 Compiler for C supports arguments -Wdeprecated: YES 00:02:25.564 Compiler for C supports arguments -Wformat: YES 00:02:25.564 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:25.564 Compiler for C supports arguments -Wformat-security: NO 00:02:25.564 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:25.564 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:25.564 Compiler for C supports arguments -Wnested-externs: YES 00:02:25.564 Compiler for C supports arguments -Wold-style-definition: YES 00:02:25.564 Compiler for C supports arguments -Wpointer-arith: YES 00:02:25.564 Compiler for C supports arguments -Wsign-compare: YES 00:02:25.564 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:25.564 Compiler for C supports arguments -Wundef: YES 00:02:25.564 Compiler for C supports arguments -Wwrite-strings: YES 00:02:25.564 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:25.564 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:25.564 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:25.564 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:25.564 Program objdump found: YES (/usr/bin/objdump) 00:02:25.564 Compiler for C supports arguments -mavx512f: YES 00:02:25.564 Checking if "AVX512 checking" compiles: YES 00:02:25.564 Fetching value of define "__SSE4_2__" : 1 00:02:25.564 Fetching value of define "__AES__" : 1 00:02:25.564 Fetching value of define "__AVX__" : 1 00:02:25.564 Fetching value of define "__AVX2__" : (undefined) 00:02:25.564 Fetching value of define "__AVX512BW__" : (undefined) 00:02:25.564 Fetching value of define "__AVX512CD__" : (undefined) 00:02:25.564 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:25.564 Fetching value of define "__AVX512F__" : (undefined) 00:02:25.564 Fetching value of define "__AVX512VL__" : (undefined) 00:02:25.564 Fetching value of define "__PCLMUL__" : 1 00:02:25.564 Fetching value of define "__RDRND__" : 1 00:02:25.564 Fetching value of define "__RDSEED__" : (undefined) 00:02:25.564 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:25.564 Fetching value of define "__znver1__" : (undefined) 00:02:25.564 Fetching value of define "__znver2__" : (undefined) 00:02:25.564 Fetching value of define "__znver3__" : (undefined) 00:02:25.564 Fetching value of define "__znver4__" : (undefined) 00:02:25.564 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:25.564 Message: lib/log: Defining dependency "log" 00:02:25.564 Message: lib/kvargs: Defining dependency "kvargs" 00:02:25.564 Message: lib/telemetry: Defining dependency "telemetry" 00:02:25.564 Checking for function "getentropy" : NO 00:02:25.564 Message: lib/eal: Defining dependency "eal" 00:02:25.564 Message: lib/ring: Defining dependency "ring" 00:02:25.564 Message: lib/rcu: Defining dependency "rcu" 00:02:25.564 Message: lib/mempool: Defining dependency "mempool" 00:02:25.564 Message: lib/mbuf: Defining dependency "mbuf" 00:02:25.564 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:25.564 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:25.564 Compiler for C supports arguments -mpclmul: YES 00:02:25.564 Compiler for C supports arguments -maes: YES 00:02:25.564 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:25.564 Compiler for C supports arguments -mavx512bw: YES 00:02:25.564 Compiler for C supports arguments -mavx512dq: YES 00:02:25.564 Compiler for C supports arguments -mavx512vl: YES 00:02:25.564 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:25.564 Compiler for C supports arguments -mavx2: YES 00:02:25.564 Compiler for C supports arguments -mavx: YES 00:02:25.564 Message: lib/net: Defining dependency "net" 00:02:25.564 Message: lib/meter: Defining dependency "meter" 00:02:25.564 Message: lib/ethdev: Defining dependency "ethdev" 00:02:25.564 Message: lib/pci: Defining dependency "pci" 00:02:25.564 Message: lib/cmdline: Defining dependency "cmdline" 00:02:25.564 Message: lib/hash: Defining dependency "hash" 00:02:25.564 Message: lib/timer: Defining dependency "timer" 00:02:25.564 Message: lib/compressdev: Defining dependency "compressdev" 00:02:25.564 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:25.564 Message: lib/dmadev: Defining dependency "dmadev" 00:02:25.564 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:25.564 Message: lib/power: Defining dependency "power" 00:02:25.564 Message: lib/reorder: Defining dependency 
"reorder" 00:02:25.564 Message: lib/security: Defining dependency "security" 00:02:25.564 Has header "linux/userfaultfd.h" : YES 00:02:25.564 Has header "linux/vduse.h" : YES 00:02:25.564 Message: lib/vhost: Defining dependency "vhost" 00:02:25.564 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:25.564 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:25.564 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:25.564 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:25.564 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:25.564 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:25.564 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:25.564 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:25.565 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:25.565 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:25.565 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:25.565 Configuring doxy-api-html.conf using configuration 00:02:25.565 Configuring doxy-api-man.conf using configuration 00:02:25.565 Program mandb found: YES (/usr/bin/mandb) 00:02:25.565 Program sphinx-build found: NO 00:02:25.565 Configuring rte_build_config.h using configuration 00:02:25.565 Message: 00:02:25.565 ================= 00:02:25.565 Applications Enabled 00:02:25.565 ================= 00:02:25.565 00:02:25.565 apps: 00:02:25.565 00:02:25.565 00:02:25.565 Message: 00:02:25.565 ================= 00:02:25.565 Libraries Enabled 00:02:25.565 ================= 00:02:25.565 00:02:25.565 libs: 00:02:25.565 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:25.565 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:25.565 cryptodev, dmadev, power, reorder, security, vhost, 00:02:25.565 00:02:25.565 Message: 00:02:25.565 =============== 00:02:25.565 Drivers Enabled 00:02:25.565 =============== 00:02:25.565 00:02:25.565 common: 00:02:25.565 00:02:25.565 bus: 00:02:25.565 pci, vdev, 00:02:25.565 mempool: 00:02:25.565 ring, 00:02:25.565 dma: 00:02:25.565 00:02:25.565 net: 00:02:25.565 00:02:25.565 crypto: 00:02:25.565 00:02:25.565 compress: 00:02:25.565 00:02:25.565 vdpa: 00:02:25.565 00:02:25.565 00:02:25.565 Message: 00:02:25.565 ================= 00:02:25.565 Content Skipped 00:02:25.565 ================= 00:02:25.565 00:02:25.565 apps: 00:02:25.565 dumpcap: explicitly disabled via build config 00:02:25.565 graph: explicitly disabled via build config 00:02:25.565 pdump: explicitly disabled via build config 00:02:25.565 proc-info: explicitly disabled via build config 00:02:25.565 test-acl: explicitly disabled via build config 00:02:25.565 test-bbdev: explicitly disabled via build config 00:02:25.565 test-cmdline: explicitly disabled via build config 00:02:25.565 test-compress-perf: explicitly disabled via build config 00:02:25.565 test-crypto-perf: explicitly disabled via build config 00:02:25.565 test-dma-perf: explicitly disabled via build config 00:02:25.565 test-eventdev: explicitly disabled via build config 00:02:25.565 test-fib: explicitly disabled via build config 00:02:25.565 test-flow-perf: explicitly disabled via build config 00:02:25.565 test-gpudev: explicitly disabled via build config 00:02:25.565 test-mldev: explicitly disabled via build config 00:02:25.565 test-pipeline: explicitly disabled via build config 00:02:25.565 test-pmd: explicitly 
disabled via build config 00:02:25.565 test-regex: explicitly disabled via build config 00:02:25.565 test-sad: explicitly disabled via build config 00:02:25.565 test-security-perf: explicitly disabled via build config 00:02:25.565 00:02:25.565 libs: 00:02:25.565 argparse: explicitly disabled via build config 00:02:25.565 metrics: explicitly disabled via build config 00:02:25.565 acl: explicitly disabled via build config 00:02:25.565 bbdev: explicitly disabled via build config 00:02:25.565 bitratestats: explicitly disabled via build config 00:02:25.565 bpf: explicitly disabled via build config 00:02:25.565 cfgfile: explicitly disabled via build config 00:02:25.565 distributor: explicitly disabled via build config 00:02:25.565 efd: explicitly disabled via build config 00:02:25.565 eventdev: explicitly disabled via build config 00:02:25.565 dispatcher: explicitly disabled via build config 00:02:25.565 gpudev: explicitly disabled via build config 00:02:25.565 gro: explicitly disabled via build config 00:02:25.565 gso: explicitly disabled via build config 00:02:25.565 ip_frag: explicitly disabled via build config 00:02:25.565 jobstats: explicitly disabled via build config 00:02:25.565 latencystats: explicitly disabled via build config 00:02:25.565 lpm: explicitly disabled via build config 00:02:25.565 member: explicitly disabled via build config 00:02:25.565 pcapng: explicitly disabled via build config 00:02:25.565 rawdev: explicitly disabled via build config 00:02:25.565 regexdev: explicitly disabled via build config 00:02:25.565 mldev: explicitly disabled via build config 00:02:25.565 rib: explicitly disabled via build config 00:02:25.565 sched: explicitly disabled via build config 00:02:25.565 stack: explicitly disabled via build config 00:02:25.565 ipsec: explicitly disabled via build config 00:02:25.565 pdcp: explicitly disabled via build config 00:02:25.565 fib: explicitly disabled via build config 00:02:25.565 port: explicitly disabled via build config 00:02:25.565 pdump: explicitly disabled via build config 00:02:25.565 table: explicitly disabled via build config 00:02:25.565 pipeline: explicitly disabled via build config 00:02:25.565 graph: explicitly disabled via build config 00:02:25.565 node: explicitly disabled via build config 00:02:25.565 00:02:25.565 drivers: 00:02:25.565 common/cpt: not in enabled drivers build config 00:02:25.565 common/dpaax: not in enabled drivers build config 00:02:25.565 common/iavf: not in enabled drivers build config 00:02:25.565 common/idpf: not in enabled drivers build config 00:02:25.565 common/ionic: not in enabled drivers build config 00:02:25.565 common/mvep: not in enabled drivers build config 00:02:25.565 common/octeontx: not in enabled drivers build config 00:02:25.565 bus/auxiliary: not in enabled drivers build config 00:02:25.565 bus/cdx: not in enabled drivers build config 00:02:25.565 bus/dpaa: not in enabled drivers build config 00:02:25.565 bus/fslmc: not in enabled drivers build config 00:02:25.565 bus/ifpga: not in enabled drivers build config 00:02:25.565 bus/platform: not in enabled drivers build config 00:02:25.565 bus/uacce: not in enabled drivers build config 00:02:25.565 bus/vmbus: not in enabled drivers build config 00:02:25.565 common/cnxk: not in enabled drivers build config 00:02:25.565 common/mlx5: not in enabled drivers build config 00:02:25.565 common/nfp: not in enabled drivers build config 00:02:25.565 common/nitrox: not in enabled drivers build config 00:02:25.565 common/qat: not in enabled drivers build config 
00:02:25.565 common/sfc_efx: not in enabled drivers build config 00:02:25.565 mempool/bucket: not in enabled drivers build config 00:02:25.565 mempool/cnxk: not in enabled drivers build config 00:02:25.565 mempool/dpaa: not in enabled drivers build config 00:02:25.565 mempool/dpaa2: not in enabled drivers build config 00:02:25.565 mempool/octeontx: not in enabled drivers build config 00:02:25.566 mempool/stack: not in enabled drivers build config 00:02:25.566 dma/cnxk: not in enabled drivers build config 00:02:25.566 dma/dpaa: not in enabled drivers build config 00:02:25.566 dma/dpaa2: not in enabled drivers build config 00:02:25.566 dma/hisilicon: not in enabled drivers build config 00:02:25.566 dma/idxd: not in enabled drivers build config 00:02:25.566 dma/ioat: not in enabled drivers build config 00:02:25.566 dma/skeleton: not in enabled drivers build config 00:02:25.566 net/af_packet: not in enabled drivers build config 00:02:25.566 net/af_xdp: not in enabled drivers build config 00:02:25.566 net/ark: not in enabled drivers build config 00:02:25.566 net/atlantic: not in enabled drivers build config 00:02:25.566 net/avp: not in enabled drivers build config 00:02:25.566 net/axgbe: not in enabled drivers build config 00:02:25.566 net/bnx2x: not in enabled drivers build config 00:02:25.566 net/bnxt: not in enabled drivers build config 00:02:25.566 net/bonding: not in enabled drivers build config 00:02:25.566 net/cnxk: not in enabled drivers build config 00:02:25.566 net/cpfl: not in enabled drivers build config 00:02:25.566 net/cxgbe: not in enabled drivers build config 00:02:25.566 net/dpaa: not in enabled drivers build config 00:02:25.566 net/dpaa2: not in enabled drivers build config 00:02:25.566 net/e1000: not in enabled drivers build config 00:02:25.566 net/ena: not in enabled drivers build config 00:02:25.566 net/enetc: not in enabled drivers build config 00:02:25.566 net/enetfec: not in enabled drivers build config 00:02:25.566 net/enic: not in enabled drivers build config 00:02:25.566 net/failsafe: not in enabled drivers build config 00:02:25.566 net/fm10k: not in enabled drivers build config 00:02:25.566 net/gve: not in enabled drivers build config 00:02:25.566 net/hinic: not in enabled drivers build config 00:02:25.566 net/hns3: not in enabled drivers build config 00:02:25.566 net/i40e: not in enabled drivers build config 00:02:25.566 net/iavf: not in enabled drivers build config 00:02:25.566 net/ice: not in enabled drivers build config 00:02:25.566 net/idpf: not in enabled drivers build config 00:02:25.566 net/igc: not in enabled drivers build config 00:02:25.566 net/ionic: not in enabled drivers build config 00:02:25.566 net/ipn3ke: not in enabled drivers build config 00:02:25.566 net/ixgbe: not in enabled drivers build config 00:02:25.566 net/mana: not in enabled drivers build config 00:02:25.566 net/memif: not in enabled drivers build config 00:02:25.566 net/mlx4: not in enabled drivers build config 00:02:25.566 net/mlx5: not in enabled drivers build config 00:02:25.566 net/mvneta: not in enabled drivers build config 00:02:25.566 net/mvpp2: not in enabled drivers build config 00:02:25.566 net/netvsc: not in enabled drivers build config 00:02:25.566 net/nfb: not in enabled drivers build config 00:02:25.566 net/nfp: not in enabled drivers build config 00:02:25.566 net/ngbe: not in enabled drivers build config 00:02:25.566 net/null: not in enabled drivers build config 00:02:25.566 net/octeontx: not in enabled drivers build config 00:02:25.566 net/octeon_ep: not in enabled 
drivers build config 00:02:25.566 net/pcap: not in enabled drivers build config 00:02:25.566 net/pfe: not in enabled drivers build config 00:02:25.566 net/qede: not in enabled drivers build config 00:02:25.566 net/ring: not in enabled drivers build config 00:02:25.566 net/sfc: not in enabled drivers build config 00:02:25.566 net/softnic: not in enabled drivers build config 00:02:25.566 net/tap: not in enabled drivers build config 00:02:25.566 net/thunderx: not in enabled drivers build config 00:02:25.566 net/txgbe: not in enabled drivers build config 00:02:25.566 net/vdev_netvsc: not in enabled drivers build config 00:02:25.566 net/vhost: not in enabled drivers build config 00:02:25.566 net/virtio: not in enabled drivers build config 00:02:25.566 net/vmxnet3: not in enabled drivers build config 00:02:25.566 raw/*: missing internal dependency, "rawdev" 00:02:25.566 crypto/armv8: not in enabled drivers build config 00:02:25.566 crypto/bcmfs: not in enabled drivers build config 00:02:25.566 crypto/caam_jr: not in enabled drivers build config 00:02:25.566 crypto/ccp: not in enabled drivers build config 00:02:25.566 crypto/cnxk: not in enabled drivers build config 00:02:25.566 crypto/dpaa_sec: not in enabled drivers build config 00:02:25.566 crypto/dpaa2_sec: not in enabled drivers build config 00:02:25.566 crypto/ipsec_mb: not in enabled drivers build config 00:02:25.566 crypto/mlx5: not in enabled drivers build config 00:02:25.566 crypto/mvsam: not in enabled drivers build config 00:02:25.566 crypto/nitrox: not in enabled drivers build config 00:02:25.566 crypto/null: not in enabled drivers build config 00:02:25.566 crypto/octeontx: not in enabled drivers build config 00:02:25.566 crypto/openssl: not in enabled drivers build config 00:02:25.566 crypto/scheduler: not in enabled drivers build config 00:02:25.566 crypto/uadk: not in enabled drivers build config 00:02:25.566 crypto/virtio: not in enabled drivers build config 00:02:25.566 compress/isal: not in enabled drivers build config 00:02:25.566 compress/mlx5: not in enabled drivers build config 00:02:25.566 compress/nitrox: not in enabled drivers build config 00:02:25.566 compress/octeontx: not in enabled drivers build config 00:02:25.566 compress/zlib: not in enabled drivers build config 00:02:25.566 regex/*: missing internal dependency, "regexdev" 00:02:25.566 ml/*: missing internal dependency, "mldev" 00:02:25.566 vdpa/ifc: not in enabled drivers build config 00:02:25.566 vdpa/mlx5: not in enabled drivers build config 00:02:25.566 vdpa/nfp: not in enabled drivers build config 00:02:25.566 vdpa/sfc: not in enabled drivers build config 00:02:25.566 event/*: missing internal dependency, "eventdev" 00:02:25.566 baseband/*: missing internal dependency, "bbdev" 00:02:25.566 gpu/*: missing internal dependency, "gpudev" 00:02:25.566 00:02:25.566 00:02:25.566 Build targets in project: 85 00:02:25.566 00:02:25.566 DPDK 24.03.0 00:02:25.566 00:02:25.566 User defined options 00:02:25.566 buildtype : debug 00:02:25.566 default_library : shared 00:02:25.566 libdir : lib 00:02:25.566 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:25.566 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:25.566 c_link_args : 00:02:25.566 cpu_instruction_set: native 00:02:25.566 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:25.566 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:25.566 enable_docs : false 00:02:25.566 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:25.566 enable_kmods : false 00:02:25.566 max_lcores : 128 00:02:25.566 tests : false 00:02:25.567 00:02:25.567 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:26.140 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:26.140 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:26.140 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:26.140 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:26.140 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:26.140 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:26.140 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:26.140 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:26.140 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:26.140 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:26.140 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:26.140 [11/268] Linking static target lib/librte_kvargs.a 00:02:26.140 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:26.140 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:26.140 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:26.140 [15/268] Linking static target lib/librte_log.a 00:02:26.140 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:26.709 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.972 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:26.972 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:26.972 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:26.972 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:26.972 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:26.972 [23/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:26.972 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:26.972 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:26.972 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:26.972 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:26.972 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:26.972 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:26.972 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:26.972 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:26.972 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:26.972 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:26.972 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:26.972 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:26.972 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:26.972 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:26.972 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:26.972 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:26.972 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:26.972 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:26.972 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:26.972 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:26.972 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:26.972 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:26.972 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:26.972 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:26.972 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:26.972 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:26.972 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:26.972 [51/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:26.972 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:27.232 [53/268] Linking static target lib/librte_telemetry.a 00:02:27.232 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:27.232 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:27.232 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:27.232 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:27.232 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:27.232 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:27.232 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:27.232 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:27.232 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:27.232 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:27.492 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:27.492 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:27.492 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.492 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:27.492 [68/268] Linking target lib/librte_log.so.24.1 00:02:27.492 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:27.492 [70/268] Linking static target lib/librte_pci.a 00:02:27.754 
[71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:27.754 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:27.754 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:27.754 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:27.754 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:27.754 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:27.754 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:27.754 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:28.015 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:28.015 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:28.015 [81/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:28.015 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:28.015 [83/268] Linking target lib/librte_kvargs.so.24.1 00:02:28.015 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:28.015 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:28.015 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:28.015 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:28.015 [88/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:28.015 [89/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:28.015 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:28.015 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:28.015 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:28.015 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:28.015 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:28.015 [95/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:28.015 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:28.015 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:28.015 [98/268] Linking static target lib/librte_meter.a 00:02:28.015 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:28.015 [100/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:28.015 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:28.015 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:28.015 [103/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:28.015 [104/268] Linking static target lib/librte_ring.a 00:02:28.015 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:28.277 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:28.277 [107/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.277 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:28.277 [109/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.277 [110/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:28.277 [111/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:28.277 [112/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:28.277 [113/268] Linking static target lib/librte_eal.a 00:02:28.277 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:28.277 [115/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:28.277 [116/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:28.277 [117/268] Linking target lib/librte_telemetry.so.24.1 00:02:28.277 [118/268] Linking static target lib/librte_rcu.a 00:02:28.277 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:28.277 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:28.277 [121/268] Linking static target lib/librte_mempool.a 00:02:28.277 [122/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:28.277 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:28.277 [124/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:28.277 [125/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:28.277 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:28.538 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:28.538 [128/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:28.538 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:28.538 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:28.538 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:28.538 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:28.538 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:28.538 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:28.538 [135/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:28.803 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:28.803 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:28.803 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.803 [139/268] Linking static target lib/librte_net.a 00:02:28.803 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:28.803 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:28.803 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:28.803 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:28.803 [144/268] Linking static target lib/librte_cmdline.a 00:02:28.803 [145/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.062 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:29.062 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:29.062 [148/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:29.062 [149/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.062 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:29.062 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:29.062 [152/268] 
Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:29.062 [153/268] Linking static target lib/librte_timer.a 00:02:29.062 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:29.062 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:29.062 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:29.062 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.321 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:29.321 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:29.321 [160/268] Linking static target lib/librte_dmadev.a 00:02:29.321 [161/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:29.321 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:29.321 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:29.321 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:29.321 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:29.321 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:29.321 [167/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.580 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:29.580 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:29.580 [170/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.580 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:29.580 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:29.580 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:29.580 [174/268] Linking static target lib/librte_power.a 00:02:29.580 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:29.580 [176/268] Linking static target lib/librte_compressdev.a 00:02:29.580 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:29.580 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:29.580 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:29.580 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:29.580 [181/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:29.580 [182/268] Linking static target lib/librte_hash.a 00:02:29.580 [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:29.580 [184/268] Linking static target lib/librte_reorder.a 00:02:29.838 [185/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.838 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:29.838 [187/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:29.838 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:29.838 [189/268] Linking static target lib/librte_mbuf.a 00:02:29.838 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:29.838 [191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.838 [192/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:29.838 [193/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:29.838 [194/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:29.838 [195/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:29.838 [196/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:29.838 [197/268] Linking static target lib/librte_security.a 00:02:29.838 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:29.838 [199/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:30.096 [200/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.096 [201/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.096 [202/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:30.096 [203/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.096 [204/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:30.096 [205/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:30.096 [206/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:30.096 [207/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:30.096 [208/268] Linking static target drivers/librte_bus_vdev.a 00:02:30.096 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:30.096 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:30.096 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:30.096 [212/268] Linking static target drivers/librte_bus_pci.a 00:02:30.096 [213/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.354 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:30.354 [215/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.354 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.354 [217/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:30.354 [218/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:30.354 [219/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:30.354 [220/268] Linking static target drivers/librte_mempool_ring.a 00:02:30.354 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.354 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:30.354 [223/268] Linking static target lib/librte_ethdev.a 00:02:30.354 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:30.354 [225/268] Linking static target lib/librte_cryptodev.a 00:02:30.611 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.545 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.917 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:34.817 [229/268] Generating 
lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.817 [230/268] Linking target lib/librte_eal.so.24.1 00:02:34.817 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.817 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:34.817 [233/268] Linking target lib/librte_ring.so.24.1 00:02:34.817 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:34.817 [235/268] Linking target lib/librte_timer.so.24.1 00:02:34.817 [236/268] Linking target lib/librte_meter.so.24.1 00:02:34.817 [237/268] Linking target lib/librte_pci.so.24.1 00:02:34.817 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:35.075 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:35.075 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:35.075 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:35.075 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:35.075 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:35.075 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:35.075 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:35.075 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:35.075 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:35.075 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:35.075 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:35.075 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:35.333 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:35.333 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:35.333 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:35.333 [254/268] Linking target lib/librte_net.so.24.1 00:02:35.333 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:35.591 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:35.591 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:35.591 [258/268] Linking target lib/librte_security.so.24.1 00:02:35.591 [259/268] Linking target lib/librte_hash.so.24.1 00:02:35.591 [260/268] Linking target lib/librte_cmdline.so.24.1 00:02:35.591 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:35.591 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:35.591 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:35.850 [264/268] Linking target lib/librte_power.so.24.1 00:02:39.129 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:39.130 [266/268] Linking static target lib/librte_vhost.a 00:02:39.749 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.749 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:39.749 INFO: autodetecting backend as ninja 00:02:39.749 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:01.706 CC lib/log/log.o 00:03:01.706 CC lib/log/log_flags.o 00:03:01.706 CC lib/log/log_deprecated.o 00:03:01.706 CC lib/ut/ut.o 00:03:01.706 CC 
lib/ut_mock/mock.o 00:03:01.706 LIB libspdk_ut.a 00:03:01.706 LIB libspdk_log.a 00:03:01.706 LIB libspdk_ut_mock.a 00:03:01.706 SO libspdk_ut.so.2.0 00:03:01.706 SO libspdk_ut_mock.so.6.0 00:03:01.706 SO libspdk_log.so.7.1 00:03:01.706 SYMLINK libspdk_ut.so 00:03:01.706 SYMLINK libspdk_ut_mock.so 00:03:01.706 SYMLINK libspdk_log.so 00:03:01.706 CC lib/util/base64.o 00:03:01.706 CC lib/dma/dma.o 00:03:01.706 CC lib/util/bit_array.o 00:03:01.706 CC lib/util/cpuset.o 00:03:01.706 CXX lib/trace_parser/trace.o 00:03:01.706 CC lib/ioat/ioat.o 00:03:01.706 CC lib/util/crc16.o 00:03:01.706 CC lib/util/crc32.o 00:03:01.706 CC lib/util/crc32c.o 00:03:01.706 CC lib/util/crc32_ieee.o 00:03:01.706 CC lib/util/crc64.o 00:03:01.706 CC lib/util/dif.o 00:03:01.706 CC lib/util/fd.o 00:03:01.706 CC lib/util/fd_group.o 00:03:01.706 CC lib/util/hexlify.o 00:03:01.707 CC lib/util/file.o 00:03:01.707 CC lib/util/iov.o 00:03:01.707 CC lib/util/math.o 00:03:01.707 CC lib/util/net.o 00:03:01.707 CC lib/util/pipe.o 00:03:01.707 CC lib/util/strerror_tls.o 00:03:01.707 CC lib/util/string.o 00:03:01.707 CC lib/util/uuid.o 00:03:01.707 CC lib/util/xor.o 00:03:01.707 CC lib/util/md5.o 00:03:01.707 CC lib/util/zipf.o 00:03:01.707 CC lib/vfio_user/host/vfio_user_pci.o 00:03:01.707 CC lib/vfio_user/host/vfio_user.o 00:03:01.707 LIB libspdk_dma.a 00:03:01.707 SO libspdk_dma.so.5.0 00:03:01.707 SYMLINK libspdk_dma.so 00:03:01.707 LIB libspdk_ioat.a 00:03:01.707 SO libspdk_ioat.so.7.0 00:03:01.707 SYMLINK libspdk_ioat.so 00:03:01.707 LIB libspdk_vfio_user.a 00:03:01.707 SO libspdk_vfio_user.so.5.0 00:03:01.707 SYMLINK libspdk_vfio_user.so 00:03:01.707 LIB libspdk_util.a 00:03:01.707 SO libspdk_util.so.10.1 00:03:01.707 SYMLINK libspdk_util.so 00:03:01.707 CC lib/rdma_utils/rdma_utils.o 00:03:01.707 CC lib/json/json_parse.o 00:03:01.707 CC lib/vmd/vmd.o 00:03:01.707 CC lib/vmd/led.o 00:03:01.707 CC lib/json/json_util.o 00:03:01.707 CC lib/idxd/idxd.o 00:03:01.707 CC lib/conf/conf.o 00:03:01.707 CC lib/env_dpdk/env.o 00:03:01.707 CC lib/env_dpdk/memory.o 00:03:01.707 CC lib/json/json_write.o 00:03:01.707 CC lib/idxd/idxd_user.o 00:03:01.707 CC lib/env_dpdk/pci.o 00:03:01.707 CC lib/idxd/idxd_kernel.o 00:03:01.707 CC lib/env_dpdk/init.o 00:03:01.707 CC lib/env_dpdk/threads.o 00:03:01.707 CC lib/env_dpdk/pci_ioat.o 00:03:01.707 CC lib/env_dpdk/pci_virtio.o 00:03:01.707 CC lib/env_dpdk/pci_vmd.o 00:03:01.707 CC lib/env_dpdk/pci_idxd.o 00:03:01.707 CC lib/env_dpdk/pci_event.o 00:03:01.707 CC lib/env_dpdk/sigbus_handler.o 00:03:01.707 CC lib/env_dpdk/pci_dpdk.o 00:03:01.707 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:01.707 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:01.707 LIB libspdk_trace_parser.a 00:03:01.707 SO libspdk_trace_parser.so.6.0 00:03:01.707 SYMLINK libspdk_trace_parser.so 00:03:01.707 LIB libspdk_conf.a 00:03:01.707 SO libspdk_conf.so.6.0 00:03:01.707 LIB libspdk_rdma_utils.a 00:03:01.707 LIB libspdk_json.a 00:03:01.707 SO libspdk_rdma_utils.so.1.0 00:03:01.707 SYMLINK libspdk_conf.so 00:03:01.707 SO libspdk_json.so.6.0 00:03:01.707 SYMLINK libspdk_rdma_utils.so 00:03:01.707 SYMLINK libspdk_json.so 00:03:01.707 CC lib/rdma_provider/common.o 00:03:01.707 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:01.707 CC lib/jsonrpc/jsonrpc_server.o 00:03:01.707 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:01.707 CC lib/jsonrpc/jsonrpc_client.o 00:03:01.707 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:01.707 LIB libspdk_idxd.a 00:03:01.707 SO libspdk_idxd.so.12.1 00:03:01.707 LIB libspdk_vmd.a 00:03:01.707 SO libspdk_vmd.so.6.0 00:03:01.707 
SYMLINK libspdk_idxd.so 00:03:01.707 SYMLINK libspdk_vmd.so 00:03:01.707 LIB libspdk_rdma_provider.a 00:03:01.707 SO libspdk_rdma_provider.so.7.0 00:03:01.707 LIB libspdk_jsonrpc.a 00:03:01.707 SYMLINK libspdk_rdma_provider.so 00:03:01.707 SO libspdk_jsonrpc.so.6.0 00:03:01.707 SYMLINK libspdk_jsonrpc.so 00:03:01.707 CC lib/rpc/rpc.o 00:03:01.964 LIB libspdk_rpc.a 00:03:01.964 SO libspdk_rpc.so.6.0 00:03:01.964 SYMLINK libspdk_rpc.so 00:03:01.964 CC lib/trace/trace.o 00:03:01.964 CC lib/trace/trace_flags.o 00:03:01.964 CC lib/keyring/keyring.o 00:03:01.964 CC lib/trace/trace_rpc.o 00:03:01.964 CC lib/keyring/keyring_rpc.o 00:03:01.964 CC lib/notify/notify.o 00:03:01.964 CC lib/notify/notify_rpc.o 00:03:02.222 LIB libspdk_notify.a 00:03:02.222 SO libspdk_notify.so.6.0 00:03:02.222 LIB libspdk_keyring.a 00:03:02.222 SYMLINK libspdk_notify.so 00:03:02.222 SO libspdk_keyring.so.2.0 00:03:02.479 LIB libspdk_trace.a 00:03:02.479 SO libspdk_trace.so.11.0 00:03:02.479 SYMLINK libspdk_keyring.so 00:03:02.479 SYMLINK libspdk_trace.so 00:03:02.479 LIB libspdk_env_dpdk.a 00:03:02.479 CC lib/sock/sock.o 00:03:02.479 CC lib/sock/sock_rpc.o 00:03:02.479 CC lib/thread/thread.o 00:03:02.479 CC lib/thread/iobuf.o 00:03:02.737 SO libspdk_env_dpdk.so.15.1 00:03:02.737 SYMLINK libspdk_env_dpdk.so 00:03:02.995 LIB libspdk_sock.a 00:03:02.995 SO libspdk_sock.so.10.0 00:03:02.995 SYMLINK libspdk_sock.so 00:03:03.253 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:03.253 CC lib/nvme/nvme_ctrlr.o 00:03:03.253 CC lib/nvme/nvme_fabric.o 00:03:03.253 CC lib/nvme/nvme_ns_cmd.o 00:03:03.253 CC lib/nvme/nvme_ns.o 00:03:03.253 CC lib/nvme/nvme_pcie_common.o 00:03:03.253 CC lib/nvme/nvme_pcie.o 00:03:03.253 CC lib/nvme/nvme_qpair.o 00:03:03.253 CC lib/nvme/nvme.o 00:03:03.253 CC lib/nvme/nvme_quirks.o 00:03:03.253 CC lib/nvme/nvme_transport.o 00:03:03.253 CC lib/nvme/nvme_discovery.o 00:03:03.253 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:03.253 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:03.253 CC lib/nvme/nvme_tcp.o 00:03:03.253 CC lib/nvme/nvme_opal.o 00:03:03.253 CC lib/nvme/nvme_io_msg.o 00:03:03.253 CC lib/nvme/nvme_poll_group.o 00:03:03.253 CC lib/nvme/nvme_zns.o 00:03:03.253 CC lib/nvme/nvme_stubs.o 00:03:03.253 CC lib/nvme/nvme_auth.o 00:03:03.253 CC lib/nvme/nvme_cuse.o 00:03:03.253 CC lib/nvme/nvme_vfio_user.o 00:03:03.253 CC lib/nvme/nvme_rdma.o 00:03:04.187 LIB libspdk_thread.a 00:03:04.187 SO libspdk_thread.so.11.0 00:03:04.187 SYMLINK libspdk_thread.so 00:03:04.445 CC lib/vfu_tgt/tgt_endpoint.o 00:03:04.445 CC lib/fsdev/fsdev.o 00:03:04.445 CC lib/blob/blobstore.o 00:03:04.445 CC lib/fsdev/fsdev_io.o 00:03:04.445 CC lib/vfu_tgt/tgt_rpc.o 00:03:04.445 CC lib/accel/accel.o 00:03:04.445 CC lib/init/json_config.o 00:03:04.445 CC lib/virtio/virtio.o 00:03:04.445 CC lib/accel/accel_rpc.o 00:03:04.445 CC lib/fsdev/fsdev_rpc.o 00:03:04.445 CC lib/blob/request.o 00:03:04.445 CC lib/init/subsystem.o 00:03:04.445 CC lib/accel/accel_sw.o 00:03:04.445 CC lib/blob/zeroes.o 00:03:04.445 CC lib/virtio/virtio_vhost_user.o 00:03:04.445 CC lib/init/subsystem_rpc.o 00:03:04.445 CC lib/virtio/virtio_vfio_user.o 00:03:04.445 CC lib/blob/blob_bs_dev.o 00:03:04.445 CC lib/init/rpc.o 00:03:04.445 CC lib/virtio/virtio_pci.o 00:03:04.703 LIB libspdk_init.a 00:03:04.703 SO libspdk_init.so.6.0 00:03:04.703 SYMLINK libspdk_init.so 00:03:04.962 LIB libspdk_virtio.a 00:03:04.962 LIB libspdk_vfu_tgt.a 00:03:04.962 SO libspdk_vfu_tgt.so.3.0 00:03:04.962 SO libspdk_virtio.so.7.0 00:03:04.962 CC lib/event/app.o 00:03:04.962 CC lib/event/reactor.o 00:03:04.962 CC 
lib/event/log_rpc.o 00:03:04.962 CC lib/event/app_rpc.o 00:03:04.962 CC lib/event/scheduler_static.o 00:03:04.962 SYMLINK libspdk_vfu_tgt.so 00:03:04.962 SYMLINK libspdk_virtio.so 00:03:05.220 LIB libspdk_fsdev.a 00:03:05.220 SO libspdk_fsdev.so.2.0 00:03:05.220 SYMLINK libspdk_fsdev.so 00:03:05.478 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:05.478 LIB libspdk_event.a 00:03:05.478 SO libspdk_event.so.14.0 00:03:05.478 SYMLINK libspdk_event.so 00:03:05.737 LIB libspdk_accel.a 00:03:05.737 SO libspdk_accel.so.16.0 00:03:05.737 LIB libspdk_nvme.a 00:03:05.737 SYMLINK libspdk_accel.so 00:03:05.737 SO libspdk_nvme.so.15.0 00:03:05.995 CC lib/bdev/bdev.o 00:03:05.995 CC lib/bdev/bdev_rpc.o 00:03:05.995 CC lib/bdev/bdev_zone.o 00:03:05.995 CC lib/bdev/part.o 00:03:05.995 CC lib/bdev/scsi_nvme.o 00:03:05.995 LIB libspdk_fuse_dispatcher.a 00:03:05.995 SYMLINK libspdk_nvme.so 00:03:05.995 SO libspdk_fuse_dispatcher.so.1.0 00:03:06.254 SYMLINK libspdk_fuse_dispatcher.so 00:03:07.630 LIB libspdk_blob.a 00:03:07.630 SO libspdk_blob.so.11.0 00:03:07.630 SYMLINK libspdk_blob.so 00:03:07.888 CC lib/blobfs/blobfs.o 00:03:07.888 CC lib/blobfs/tree.o 00:03:07.888 CC lib/lvol/lvol.o 00:03:08.451 LIB libspdk_bdev.a 00:03:08.451 SO libspdk_bdev.so.17.0 00:03:08.713 SYMLINK libspdk_bdev.so 00:03:08.713 LIB libspdk_blobfs.a 00:03:08.713 SO libspdk_blobfs.so.10.0 00:03:08.713 SYMLINK libspdk_blobfs.so 00:03:08.713 CC lib/nbd/nbd.o 00:03:08.713 CC lib/nbd/nbd_rpc.o 00:03:08.713 CC lib/scsi/dev.o 00:03:08.713 CC lib/ublk/ublk.o 00:03:08.713 CC lib/scsi/lun.o 00:03:08.713 CC lib/nvmf/ctrlr.o 00:03:08.713 CC lib/ublk/ublk_rpc.o 00:03:08.713 CC lib/scsi/port.o 00:03:08.713 CC lib/nvmf/ctrlr_discovery.o 00:03:08.713 CC lib/ftl/ftl_core.o 00:03:08.713 CC lib/scsi/scsi.o 00:03:08.713 CC lib/nvmf/ctrlr_bdev.o 00:03:08.713 CC lib/ftl/ftl_init.o 00:03:08.713 CC lib/scsi/scsi_bdev.o 00:03:08.713 CC lib/ftl/ftl_layout.o 00:03:08.713 CC lib/nvmf/subsystem.o 00:03:08.713 CC lib/scsi/scsi_pr.o 00:03:08.713 CC lib/nvmf/nvmf.o 00:03:08.713 CC lib/ftl/ftl_debug.o 00:03:08.713 CC lib/ftl/ftl_io.o 00:03:08.713 CC lib/scsi/scsi_rpc.o 00:03:08.713 CC lib/nvmf/nvmf_rpc.o 00:03:08.713 CC lib/ftl/ftl_sb.o 00:03:08.713 CC lib/nvmf/transport.o 00:03:08.713 CC lib/scsi/task.o 00:03:08.713 CC lib/ftl/ftl_l2p.o 00:03:08.713 CC lib/ftl/ftl_l2p_flat.o 00:03:08.713 CC lib/nvmf/tcp.o 00:03:08.713 CC lib/ftl/ftl_nv_cache.o 00:03:08.713 CC lib/nvmf/stubs.o 00:03:08.713 CC lib/ftl/ftl_band.o 00:03:08.713 CC lib/nvmf/mdns_server.o 00:03:08.713 CC lib/nvmf/vfio_user.o 00:03:08.713 CC lib/ftl/ftl_band_ops.o 00:03:08.713 CC lib/nvmf/rdma.o 00:03:08.713 CC lib/ftl/ftl_writer.o 00:03:08.713 CC lib/nvmf/auth.o 00:03:08.713 CC lib/ftl/ftl_rq.o 00:03:08.713 CC lib/ftl/ftl_reloc.o 00:03:08.713 CC lib/ftl/ftl_l2p_cache.o 00:03:08.713 CC lib/ftl/ftl_p2l.o 00:03:08.713 LIB libspdk_lvol.a 00:03:08.713 CC lib/ftl/ftl_p2l_log.o 00:03:08.713 CC lib/ftl/mngt/ftl_mngt.o 00:03:08.713 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:08.713 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:08.713 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:08.713 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:08.713 SO libspdk_lvol.so.10.0 00:03:08.973 SYMLINK libspdk_lvol.so 00:03:08.973 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:09.234 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:09.234 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:09.234 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:09.234 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:09.234 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:09.234 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:09.234 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:09.234 CC lib/ftl/utils/ftl_conf.o 00:03:09.234 CC lib/ftl/utils/ftl_md.o 00:03:09.234 CC lib/ftl/utils/ftl_mempool.o 00:03:09.234 CC lib/ftl/utils/ftl_bitmap.o 00:03:09.234 CC lib/ftl/utils/ftl_property.o 00:03:09.234 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:09.234 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:09.234 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:09.234 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:09.234 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:09.234 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:09.492 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:09.492 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:09.492 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:09.492 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:09.492 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:09.492 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:09.492 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:09.492 CC lib/ftl/base/ftl_base_dev.o 00:03:09.492 CC lib/ftl/base/ftl_base_bdev.o 00:03:09.492 CC lib/ftl/ftl_trace.o 00:03:09.749 LIB libspdk_nbd.a 00:03:09.749 SO libspdk_nbd.so.7.0 00:03:09.749 SYMLINK libspdk_nbd.so 00:03:09.749 LIB libspdk_scsi.a 00:03:09.749 SO libspdk_scsi.so.9.0 00:03:10.007 SYMLINK libspdk_scsi.so 00:03:10.007 LIB libspdk_ublk.a 00:03:10.007 SO libspdk_ublk.so.3.0 00:03:10.007 SYMLINK libspdk_ublk.so 00:03:10.007 CC lib/vhost/vhost.o 00:03:10.007 CC lib/iscsi/conn.o 00:03:10.007 CC lib/vhost/vhost_rpc.o 00:03:10.007 CC lib/iscsi/init_grp.o 00:03:10.007 CC lib/vhost/vhost_scsi.o 00:03:10.007 CC lib/iscsi/iscsi.o 00:03:10.007 CC lib/vhost/vhost_blk.o 00:03:10.007 CC lib/iscsi/param.o 00:03:10.007 CC lib/vhost/rte_vhost_user.o 00:03:10.007 CC lib/iscsi/portal_grp.o 00:03:10.007 CC lib/iscsi/tgt_node.o 00:03:10.007 CC lib/iscsi/iscsi_subsystem.o 00:03:10.007 CC lib/iscsi/iscsi_rpc.o 00:03:10.007 CC lib/iscsi/task.o 00:03:10.266 LIB libspdk_ftl.a 00:03:10.524 SO libspdk_ftl.so.9.0 00:03:10.782 SYMLINK libspdk_ftl.so 00:03:11.347 LIB libspdk_vhost.a 00:03:11.347 SO libspdk_vhost.so.8.0 00:03:11.347 SYMLINK libspdk_vhost.so 00:03:11.347 LIB libspdk_nvmf.a 00:03:11.605 SO libspdk_nvmf.so.20.0 00:03:11.605 LIB libspdk_iscsi.a 00:03:11.605 SO libspdk_iscsi.so.8.0 00:03:11.605 SYMLINK libspdk_iscsi.so 00:03:11.605 SYMLINK libspdk_nvmf.so 00:03:11.863 CC module/env_dpdk/env_dpdk_rpc.o 00:03:11.863 CC module/vfu_device/vfu_virtio.o 00:03:11.863 CC module/vfu_device/vfu_virtio_blk.o 00:03:11.863 CC module/vfu_device/vfu_virtio_scsi.o 00:03:11.863 CC module/vfu_device/vfu_virtio_rpc.o 00:03:11.863 CC module/vfu_device/vfu_virtio_fs.o 00:03:12.121 CC module/accel/error/accel_error.o 00:03:12.121 CC module/accel/ioat/accel_ioat.o 00:03:12.121 CC module/accel/ioat/accel_ioat_rpc.o 00:03:12.121 CC module/accel/error/accel_error_rpc.o 00:03:12.121 CC module/sock/posix/posix.o 00:03:12.121 CC module/keyring/file/keyring.o 00:03:12.121 CC module/keyring/file/keyring_rpc.o 00:03:12.121 CC module/accel/iaa/accel_iaa.o 00:03:12.121 CC module/accel/dsa/accel_dsa_rpc.o 00:03:12.121 CC module/accel/dsa/accel_dsa.o 00:03:12.121 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:12.121 CC module/fsdev/aio/fsdev_aio.o 00:03:12.121 CC module/accel/iaa/accel_iaa_rpc.o 00:03:12.121 CC module/blob/bdev/blob_bdev.o 00:03:12.121 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:12.121 CC module/scheduler/gscheduler/gscheduler.o 00:03:12.121 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:12.121 CC module/fsdev/aio/linux_aio_mgr.o 00:03:12.121 CC module/keyring/linux/keyring.o 00:03:12.121 CC 
module/keyring/linux/keyring_rpc.o 00:03:12.121 LIB libspdk_env_dpdk_rpc.a 00:03:12.121 SO libspdk_env_dpdk_rpc.so.6.0 00:03:12.121 SYMLINK libspdk_env_dpdk_rpc.so 00:03:12.121 LIB libspdk_keyring_file.a 00:03:12.378 LIB libspdk_keyring_linux.a 00:03:12.378 LIB libspdk_scheduler_gscheduler.a 00:03:12.378 LIB libspdk_scheduler_dpdk_governor.a 00:03:12.378 SO libspdk_keyring_file.so.2.0 00:03:12.378 SO libspdk_keyring_linux.so.1.0 00:03:12.378 SO libspdk_scheduler_gscheduler.so.4.0 00:03:12.378 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:12.378 LIB libspdk_accel_ioat.a 00:03:12.378 LIB libspdk_scheduler_dynamic.a 00:03:12.378 SO libspdk_accel_ioat.so.6.0 00:03:12.378 LIB libspdk_accel_iaa.a 00:03:12.378 SYMLINK libspdk_keyring_file.so 00:03:12.378 LIB libspdk_accel_error.a 00:03:12.378 SYMLINK libspdk_keyring_linux.so 00:03:12.378 SO libspdk_scheduler_dynamic.so.4.0 00:03:12.378 SYMLINK libspdk_scheduler_gscheduler.so 00:03:12.378 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:12.378 SO libspdk_accel_iaa.so.3.0 00:03:12.378 SO libspdk_accel_error.so.2.0 00:03:12.378 SYMLINK libspdk_accel_ioat.so 00:03:12.378 SYMLINK libspdk_scheduler_dynamic.so 00:03:12.378 SYMLINK libspdk_accel_error.so 00:03:12.378 SYMLINK libspdk_accel_iaa.so 00:03:12.379 LIB libspdk_accel_dsa.a 00:03:12.379 SO libspdk_accel_dsa.so.5.0 00:03:12.379 LIB libspdk_blob_bdev.a 00:03:12.379 SYMLINK libspdk_accel_dsa.so 00:03:12.379 SO libspdk_blob_bdev.so.11.0 00:03:12.636 SYMLINK libspdk_blob_bdev.so 00:03:12.636 LIB libspdk_vfu_device.a 00:03:12.636 SO libspdk_vfu_device.so.3.0 00:03:12.896 CC module/bdev/delay/vbdev_delay.o 00:03:12.896 CC module/bdev/gpt/gpt.o 00:03:12.896 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:12.896 CC module/bdev/null/bdev_null.o 00:03:12.896 CC module/bdev/malloc/bdev_malloc.o 00:03:12.896 CC module/bdev/gpt/vbdev_gpt.o 00:03:12.896 CC module/bdev/null/bdev_null_rpc.o 00:03:12.896 CC module/bdev/error/vbdev_error.o 00:03:12.896 CC module/bdev/error/vbdev_error_rpc.o 00:03:12.896 CC module/bdev/passthru/vbdev_passthru.o 00:03:12.896 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:12.896 CC module/bdev/lvol/vbdev_lvol.o 00:03:12.896 CC module/bdev/nvme/bdev_nvme.o 00:03:12.896 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:12.896 SYMLINK libspdk_vfu_device.so 00:03:12.896 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:12.896 CC module/bdev/raid/bdev_raid.o 00:03:12.896 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:12.896 CC module/bdev/raid/bdev_raid_rpc.o 00:03:12.896 CC module/bdev/raid/bdev_raid_sb.o 00:03:12.896 CC module/bdev/ftl/bdev_ftl.o 00:03:12.896 CC module/bdev/split/vbdev_split.o 00:03:12.896 CC module/bdev/nvme/nvme_rpc.o 00:03:12.896 CC module/blobfs/bdev/blobfs_bdev.o 00:03:12.896 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:12.896 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:12.896 CC module/bdev/nvme/bdev_mdns_client.o 00:03:12.896 CC module/bdev/split/vbdev_split_rpc.o 00:03:12.896 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:12.896 CC module/bdev/raid/raid1.o 00:03:12.896 CC module/bdev/raid/raid0.o 00:03:12.896 CC module/bdev/nvme/vbdev_opal.o 00:03:12.896 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:12.896 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:12.896 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:12.896 CC module/bdev/raid/concat.o 00:03:12.896 CC module/bdev/aio/bdev_aio_rpc.o 00:03:12.896 CC module/bdev/aio/bdev_aio.o 00:03:12.896 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:12.896 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:12.896 CC 
module/bdev/virtio/bdev_virtio_rpc.o 00:03:12.896 CC module/bdev/iscsi/bdev_iscsi.o 00:03:12.896 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:12.896 LIB libspdk_fsdev_aio.a 00:03:12.896 SO libspdk_fsdev_aio.so.1.0 00:03:12.896 LIB libspdk_sock_posix.a 00:03:12.896 SYMLINK libspdk_fsdev_aio.so 00:03:12.896 SO libspdk_sock_posix.so.6.0 00:03:13.154 SYMLINK libspdk_sock_posix.so 00:03:13.154 LIB libspdk_blobfs_bdev.a 00:03:13.154 SO libspdk_blobfs_bdev.so.6.0 00:03:13.154 LIB libspdk_bdev_zone_block.a 00:03:13.154 SO libspdk_bdev_zone_block.so.6.0 00:03:13.154 LIB libspdk_bdev_null.a 00:03:13.154 LIB libspdk_bdev_split.a 00:03:13.154 SYMLINK libspdk_blobfs_bdev.so 00:03:13.418 SO libspdk_bdev_null.so.6.0 00:03:13.418 SO libspdk_bdev_split.so.6.0 00:03:13.418 SYMLINK libspdk_bdev_zone_block.so 00:03:13.418 LIB libspdk_bdev_passthru.a 00:03:13.418 LIB libspdk_bdev_gpt.a 00:03:13.418 LIB libspdk_bdev_error.a 00:03:13.418 LIB libspdk_bdev_ftl.a 00:03:13.418 SO libspdk_bdev_gpt.so.6.0 00:03:13.418 SO libspdk_bdev_passthru.so.6.0 00:03:13.418 SYMLINK libspdk_bdev_split.so 00:03:13.418 SYMLINK libspdk_bdev_null.so 00:03:13.418 SO libspdk_bdev_error.so.6.0 00:03:13.418 SO libspdk_bdev_ftl.so.6.0 00:03:13.418 LIB libspdk_bdev_malloc.a 00:03:13.418 SYMLINK libspdk_bdev_gpt.so 00:03:13.418 SYMLINK libspdk_bdev_passthru.so 00:03:13.418 SYMLINK libspdk_bdev_error.so 00:03:13.418 SYMLINK libspdk_bdev_ftl.so 00:03:13.418 LIB libspdk_bdev_aio.a 00:03:13.418 SO libspdk_bdev_malloc.so.6.0 00:03:13.418 LIB libspdk_bdev_lvol.a 00:03:13.418 LIB libspdk_bdev_iscsi.a 00:03:13.418 SO libspdk_bdev_aio.so.6.0 00:03:13.418 SO libspdk_bdev_lvol.so.6.0 00:03:13.418 LIB libspdk_bdev_delay.a 00:03:13.418 SO libspdk_bdev_iscsi.so.6.0 00:03:13.418 SYMLINK libspdk_bdev_malloc.so 00:03:13.418 SO libspdk_bdev_delay.so.6.0 00:03:13.418 SYMLINK libspdk_bdev_aio.so 00:03:13.418 SYMLINK libspdk_bdev_lvol.so 00:03:13.418 SYMLINK libspdk_bdev_iscsi.so 00:03:13.418 SYMLINK libspdk_bdev_delay.so 00:03:13.676 LIB libspdk_bdev_virtio.a 00:03:13.676 SO libspdk_bdev_virtio.so.6.0 00:03:13.676 SYMLINK libspdk_bdev_virtio.so 00:03:14.241 LIB libspdk_bdev_raid.a 00:03:14.241 SO libspdk_bdev_raid.so.6.0 00:03:14.241 SYMLINK libspdk_bdev_raid.so 00:03:15.613 LIB libspdk_bdev_nvme.a 00:03:15.613 SO libspdk_bdev_nvme.so.7.1 00:03:15.613 SYMLINK libspdk_bdev_nvme.so 00:03:15.871 CC module/event/subsystems/fsdev/fsdev.o 00:03:15.871 CC module/event/subsystems/iobuf/iobuf.o 00:03:15.871 CC module/event/subsystems/keyring/keyring.o 00:03:15.871 CC module/event/subsystems/sock/sock.o 00:03:15.871 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:15.871 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:15.871 CC module/event/subsystems/vmd/vmd.o 00:03:15.871 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:15.871 CC module/event/subsystems/scheduler/scheduler.o 00:03:15.871 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:16.128 LIB libspdk_event_keyring.a 00:03:16.128 LIB libspdk_event_fsdev.a 00:03:16.128 LIB libspdk_event_vhost_blk.a 00:03:16.128 LIB libspdk_event_vfu_tgt.a 00:03:16.128 LIB libspdk_event_scheduler.a 00:03:16.128 LIB libspdk_event_vmd.a 00:03:16.128 LIB libspdk_event_sock.a 00:03:16.128 SO libspdk_event_fsdev.so.1.0 00:03:16.128 SO libspdk_event_keyring.so.1.0 00:03:16.128 LIB libspdk_event_iobuf.a 00:03:16.128 SO libspdk_event_vhost_blk.so.3.0 00:03:16.128 SO libspdk_event_vfu_tgt.so.3.0 00:03:16.128 SO libspdk_event_scheduler.so.4.0 00:03:16.128 SO libspdk_event_sock.so.5.0 00:03:16.128 SO libspdk_event_vmd.so.6.0 00:03:16.128 
SO libspdk_event_iobuf.so.3.0 00:03:16.128 SYMLINK libspdk_event_keyring.so 00:03:16.128 SYMLINK libspdk_event_fsdev.so 00:03:16.128 SYMLINK libspdk_event_vhost_blk.so 00:03:16.128 SYMLINK libspdk_event_scheduler.so 00:03:16.128 SYMLINK libspdk_event_vfu_tgt.so 00:03:16.128 SYMLINK libspdk_event_sock.so 00:03:16.128 SYMLINK libspdk_event_vmd.so 00:03:16.128 SYMLINK libspdk_event_iobuf.so 00:03:16.386 CC module/event/subsystems/accel/accel.o 00:03:16.386 LIB libspdk_event_accel.a 00:03:16.645 SO libspdk_event_accel.so.6.0 00:03:16.645 SYMLINK libspdk_event_accel.so 00:03:16.645 CC module/event/subsystems/bdev/bdev.o 00:03:16.903 LIB libspdk_event_bdev.a 00:03:16.903 SO libspdk_event_bdev.so.6.0 00:03:16.903 SYMLINK libspdk_event_bdev.so 00:03:17.161 CC module/event/subsystems/nbd/nbd.o 00:03:17.161 CC module/event/subsystems/scsi/scsi.o 00:03:17.161 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:17.161 CC module/event/subsystems/ublk/ublk.o 00:03:17.161 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:17.419 LIB libspdk_event_ublk.a 00:03:17.419 LIB libspdk_event_nbd.a 00:03:17.419 LIB libspdk_event_scsi.a 00:03:17.419 SO libspdk_event_nbd.so.6.0 00:03:17.419 SO libspdk_event_ublk.so.3.0 00:03:17.419 SO libspdk_event_scsi.so.6.0 00:03:17.419 SYMLINK libspdk_event_nbd.so 00:03:17.419 SYMLINK libspdk_event_ublk.so 00:03:17.419 SYMLINK libspdk_event_scsi.so 00:03:17.419 LIB libspdk_event_nvmf.a 00:03:17.419 SO libspdk_event_nvmf.so.6.0 00:03:17.419 SYMLINK libspdk_event_nvmf.so 00:03:17.676 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:17.676 CC module/event/subsystems/iscsi/iscsi.o 00:03:17.676 LIB libspdk_event_vhost_scsi.a 00:03:17.676 SO libspdk_event_vhost_scsi.so.3.0 00:03:17.676 LIB libspdk_event_iscsi.a 00:03:17.676 SO libspdk_event_iscsi.so.6.0 00:03:17.676 SYMLINK libspdk_event_vhost_scsi.so 00:03:17.934 SYMLINK libspdk_event_iscsi.so 00:03:17.934 SO libspdk.so.6.0 00:03:17.934 SYMLINK libspdk.so 00:03:18.200 CXX app/trace/trace.o 00:03:18.200 CC app/trace_record/trace_record.o 00:03:18.200 CC test/rpc_client/rpc_client_test.o 00:03:18.200 CC app/spdk_top/spdk_top.o 00:03:18.200 CC app/spdk_nvme_perf/perf.o 00:03:18.200 CC app/spdk_nvme_identify/identify.o 00:03:18.200 TEST_HEADER include/spdk/accel.h 00:03:18.200 CC app/spdk_lspci/spdk_lspci.o 00:03:18.200 TEST_HEADER include/spdk/accel_module.h 00:03:18.200 CC app/spdk_nvme_discover/discovery_aer.o 00:03:18.200 TEST_HEADER include/spdk/assert.h 00:03:18.200 TEST_HEADER include/spdk/barrier.h 00:03:18.200 TEST_HEADER include/spdk/base64.h 00:03:18.200 TEST_HEADER include/spdk/bdev.h 00:03:18.200 TEST_HEADER include/spdk/bdev_module.h 00:03:18.200 TEST_HEADER include/spdk/bdev_zone.h 00:03:18.200 TEST_HEADER include/spdk/bit_array.h 00:03:18.200 TEST_HEADER include/spdk/bit_pool.h 00:03:18.200 TEST_HEADER include/spdk/blob_bdev.h 00:03:18.200 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:18.200 TEST_HEADER include/spdk/blobfs.h 00:03:18.200 TEST_HEADER include/spdk/blob.h 00:03:18.200 TEST_HEADER include/spdk/conf.h 00:03:18.200 TEST_HEADER include/spdk/config.h 00:03:18.200 TEST_HEADER include/spdk/cpuset.h 00:03:18.200 TEST_HEADER include/spdk/crc16.h 00:03:18.200 TEST_HEADER include/spdk/crc64.h 00:03:18.200 TEST_HEADER include/spdk/crc32.h 00:03:18.200 TEST_HEADER include/spdk/dif.h 00:03:18.200 TEST_HEADER include/spdk/endian.h 00:03:18.200 TEST_HEADER include/spdk/dma.h 00:03:18.200 TEST_HEADER include/spdk/env_dpdk.h 00:03:18.200 TEST_HEADER include/spdk/env.h 00:03:18.200 TEST_HEADER include/spdk/event.h 
00:03:18.200 TEST_HEADER include/spdk/fd_group.h 00:03:18.200 TEST_HEADER include/spdk/file.h 00:03:18.200 TEST_HEADER include/spdk/fd.h 00:03:18.200 TEST_HEADER include/spdk/fsdev.h 00:03:18.200 TEST_HEADER include/spdk/fsdev_module.h 00:03:18.200 TEST_HEADER include/spdk/ftl.h 00:03:18.200 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:18.200 TEST_HEADER include/spdk/gpt_spec.h 00:03:18.200 TEST_HEADER include/spdk/hexlify.h 00:03:18.200 TEST_HEADER include/spdk/histogram_data.h 00:03:18.200 TEST_HEADER include/spdk/idxd_spec.h 00:03:18.200 TEST_HEADER include/spdk/idxd.h 00:03:18.200 TEST_HEADER include/spdk/init.h 00:03:18.200 TEST_HEADER include/spdk/ioat.h 00:03:18.200 TEST_HEADER include/spdk/ioat_spec.h 00:03:18.200 TEST_HEADER include/spdk/json.h 00:03:18.200 TEST_HEADER include/spdk/iscsi_spec.h 00:03:18.200 TEST_HEADER include/spdk/keyring.h 00:03:18.200 TEST_HEADER include/spdk/jsonrpc.h 00:03:18.200 TEST_HEADER include/spdk/keyring_module.h 00:03:18.200 TEST_HEADER include/spdk/likely.h 00:03:18.200 TEST_HEADER include/spdk/log.h 00:03:18.200 TEST_HEADER include/spdk/lvol.h 00:03:18.200 TEST_HEADER include/spdk/md5.h 00:03:18.200 TEST_HEADER include/spdk/memory.h 00:03:18.200 TEST_HEADER include/spdk/mmio.h 00:03:18.200 TEST_HEADER include/spdk/nbd.h 00:03:18.200 TEST_HEADER include/spdk/net.h 00:03:18.200 TEST_HEADER include/spdk/notify.h 00:03:18.200 TEST_HEADER include/spdk/nvme.h 00:03:18.200 TEST_HEADER include/spdk/nvme_intel.h 00:03:18.200 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:18.200 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:18.200 TEST_HEADER include/spdk/nvme_spec.h 00:03:18.200 TEST_HEADER include/spdk/nvme_zns.h 00:03:18.200 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:18.200 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:18.200 TEST_HEADER include/spdk/nvmf.h 00:03:18.200 TEST_HEADER include/spdk/nvmf_spec.h 00:03:18.200 TEST_HEADER include/spdk/nvmf_transport.h 00:03:18.200 TEST_HEADER include/spdk/opal.h 00:03:18.200 TEST_HEADER include/spdk/opal_spec.h 00:03:18.200 TEST_HEADER include/spdk/pci_ids.h 00:03:18.200 TEST_HEADER include/spdk/pipe.h 00:03:18.200 TEST_HEADER include/spdk/reduce.h 00:03:18.200 TEST_HEADER include/spdk/queue.h 00:03:18.200 TEST_HEADER include/spdk/rpc.h 00:03:18.200 TEST_HEADER include/spdk/scsi.h 00:03:18.200 TEST_HEADER include/spdk/scheduler.h 00:03:18.200 TEST_HEADER include/spdk/scsi_spec.h 00:03:18.200 TEST_HEADER include/spdk/sock.h 00:03:18.200 TEST_HEADER include/spdk/stdinc.h 00:03:18.200 TEST_HEADER include/spdk/string.h 00:03:18.200 TEST_HEADER include/spdk/thread.h 00:03:18.200 TEST_HEADER include/spdk/trace.h 00:03:18.200 TEST_HEADER include/spdk/tree.h 00:03:18.200 TEST_HEADER include/spdk/trace_parser.h 00:03:18.200 TEST_HEADER include/spdk/ublk.h 00:03:18.200 TEST_HEADER include/spdk/util.h 00:03:18.200 TEST_HEADER include/spdk/version.h 00:03:18.200 TEST_HEADER include/spdk/uuid.h 00:03:18.200 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:18.200 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:18.200 TEST_HEADER include/spdk/vhost.h 00:03:18.200 CC app/spdk_dd/spdk_dd.o 00:03:18.201 TEST_HEADER include/spdk/vmd.h 00:03:18.201 TEST_HEADER include/spdk/xor.h 00:03:18.201 TEST_HEADER include/spdk/zipf.h 00:03:18.201 CXX test/cpp_headers/accel.o 00:03:18.201 CXX test/cpp_headers/accel_module.o 00:03:18.201 CXX test/cpp_headers/assert.o 00:03:18.201 CXX test/cpp_headers/barrier.o 00:03:18.201 CXX test/cpp_headers/base64.o 00:03:18.201 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:18.201 CXX 
test/cpp_headers/bdev.o 00:03:18.201 CXX test/cpp_headers/bdev_module.o 00:03:18.201 CXX test/cpp_headers/bdev_zone.o 00:03:18.201 CXX test/cpp_headers/bit_array.o 00:03:18.201 CXX test/cpp_headers/bit_pool.o 00:03:18.201 CXX test/cpp_headers/blob_bdev.o 00:03:18.201 CXX test/cpp_headers/blobfs_bdev.o 00:03:18.201 CXX test/cpp_headers/blobfs.o 00:03:18.201 CXX test/cpp_headers/blob.o 00:03:18.201 CXX test/cpp_headers/conf.o 00:03:18.201 CXX test/cpp_headers/config.o 00:03:18.201 CXX test/cpp_headers/cpuset.o 00:03:18.201 CXX test/cpp_headers/crc16.o 00:03:18.201 CC app/iscsi_tgt/iscsi_tgt.o 00:03:18.201 CC app/nvmf_tgt/nvmf_main.o 00:03:18.201 CXX test/cpp_headers/crc32.o 00:03:18.201 CC test/env/memory/memory_ut.o 00:03:18.201 CC examples/ioat/verify/verify.o 00:03:18.201 CC test/app/histogram_perf/histogram_perf.o 00:03:18.201 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:18.201 CC app/spdk_tgt/spdk_tgt.o 00:03:18.201 CC examples/util/zipf/zipf.o 00:03:18.201 CC test/app/jsoncat/jsoncat.o 00:03:18.201 CC examples/ioat/perf/perf.o 00:03:18.201 CC test/env/vtophys/vtophys.o 00:03:18.201 CC test/thread/poller_perf/poller_perf.o 00:03:18.201 CC app/fio/nvme/fio_plugin.o 00:03:18.201 CC test/env/pci/pci_ut.o 00:03:18.201 CC test/app/stub/stub.o 00:03:18.201 CC test/dma/test_dma/test_dma.o 00:03:18.463 CC app/fio/bdev/fio_plugin.o 00:03:18.463 CC test/app/bdev_svc/bdev_svc.o 00:03:18.463 LINK spdk_lspci 00:03:18.463 CC test/env/mem_callbacks/mem_callbacks.o 00:03:18.463 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:18.463 LINK rpc_client_test 00:03:18.724 LINK jsoncat 00:03:18.724 LINK spdk_nvme_discover 00:03:18.724 LINK histogram_perf 00:03:18.724 LINK interrupt_tgt 00:03:18.724 LINK zipf 00:03:18.724 CXX test/cpp_headers/crc64.o 00:03:18.724 CXX test/cpp_headers/dif.o 00:03:18.724 LINK vtophys 00:03:18.724 LINK env_dpdk_post_init 00:03:18.724 CXX test/cpp_headers/dma.o 00:03:18.724 CXX test/cpp_headers/endian.o 00:03:18.724 LINK nvmf_tgt 00:03:18.724 CXX test/cpp_headers/env_dpdk.o 00:03:18.724 LINK poller_perf 00:03:18.724 CXX test/cpp_headers/env.o 00:03:18.724 CXX test/cpp_headers/event.o 00:03:18.724 LINK spdk_trace_record 00:03:18.724 CXX test/cpp_headers/fd_group.o 00:03:18.724 CXX test/cpp_headers/fd.o 00:03:18.724 CXX test/cpp_headers/file.o 00:03:18.724 CXX test/cpp_headers/fsdev.o 00:03:18.724 CXX test/cpp_headers/fsdev_module.o 00:03:18.724 CXX test/cpp_headers/ftl.o 00:03:18.724 LINK stub 00:03:18.724 LINK iscsi_tgt 00:03:18.724 CXX test/cpp_headers/fuse_dispatcher.o 00:03:18.724 CXX test/cpp_headers/gpt_spec.o 00:03:18.724 LINK bdev_svc 00:03:18.724 LINK verify 00:03:18.724 LINK ioat_perf 00:03:18.724 CXX test/cpp_headers/hexlify.o 00:03:18.724 CXX test/cpp_headers/histogram_data.o 00:03:18.724 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:18.724 LINK spdk_tgt 00:03:18.982 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:18.982 CXX test/cpp_headers/idxd.o 00:03:18.982 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:18.982 CXX test/cpp_headers/idxd_spec.o 00:03:18.982 CXX test/cpp_headers/init.o 00:03:18.982 CXX test/cpp_headers/ioat.o 00:03:18.982 CXX test/cpp_headers/ioat_spec.o 00:03:18.982 CXX test/cpp_headers/iscsi_spec.o 00:03:18.982 CXX test/cpp_headers/json.o 00:03:18.982 LINK spdk_dd 00:03:18.982 LINK spdk_trace 00:03:18.982 CXX test/cpp_headers/jsonrpc.o 00:03:18.982 CXX test/cpp_headers/keyring.o 00:03:18.982 CXX test/cpp_headers/keyring_module.o 00:03:18.982 CXX test/cpp_headers/likely.o 00:03:18.982 CXX test/cpp_headers/log.o 00:03:18.982 CXX 
test/cpp_headers/lvol.o 00:03:18.982 CXX test/cpp_headers/md5.o 00:03:18.982 LINK pci_ut 00:03:19.243 CXX test/cpp_headers/memory.o 00:03:19.243 CXX test/cpp_headers/mmio.o 00:03:19.243 CXX test/cpp_headers/nbd.o 00:03:19.243 CXX test/cpp_headers/net.o 00:03:19.243 CXX test/cpp_headers/notify.o 00:03:19.243 CXX test/cpp_headers/nvme.o 00:03:19.243 CXX test/cpp_headers/nvme_intel.o 00:03:19.243 CXX test/cpp_headers/nvme_ocssd.o 00:03:19.243 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:19.243 CXX test/cpp_headers/nvme_spec.o 00:03:19.243 CXX test/cpp_headers/nvme_zns.o 00:03:19.243 CXX test/cpp_headers/nvmf_cmd.o 00:03:19.243 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:19.243 LINK nvme_fuzz 00:03:19.243 CXX test/cpp_headers/nvmf.o 00:03:19.243 CC examples/sock/hello_world/hello_sock.o 00:03:19.243 CXX test/cpp_headers/nvmf_spec.o 00:03:19.243 CXX test/cpp_headers/nvmf_transport.o 00:03:19.243 LINK test_dma 00:03:19.243 CC examples/thread/thread/thread_ex.o 00:03:19.507 CXX test/cpp_headers/opal.o 00:03:19.507 CXX test/cpp_headers/opal_spec.o 00:03:19.507 CC examples/idxd/perf/perf.o 00:03:19.507 CXX test/cpp_headers/pci_ids.o 00:03:19.507 CC test/event/event_perf/event_perf.o 00:03:19.507 LINK spdk_bdev 00:03:19.507 LINK spdk_nvme 00:03:19.507 CC examples/vmd/led/led.o 00:03:19.507 CC test/event/reactor/reactor.o 00:03:19.507 CC examples/vmd/lsvmd/lsvmd.o 00:03:19.507 CXX test/cpp_headers/pipe.o 00:03:19.507 CC test/event/reactor_perf/reactor_perf.o 00:03:19.507 CXX test/cpp_headers/queue.o 00:03:19.507 CXX test/cpp_headers/reduce.o 00:03:19.507 CC test/event/app_repeat/app_repeat.o 00:03:19.507 CXX test/cpp_headers/rpc.o 00:03:19.507 CXX test/cpp_headers/scheduler.o 00:03:19.507 CXX test/cpp_headers/scsi.o 00:03:19.507 CXX test/cpp_headers/scsi_spec.o 00:03:19.507 CXX test/cpp_headers/sock.o 00:03:19.507 CXX test/cpp_headers/stdinc.o 00:03:19.507 CXX test/cpp_headers/string.o 00:03:19.507 CXX test/cpp_headers/thread.o 00:03:19.507 CC test/event/scheduler/scheduler.o 00:03:19.507 CXX test/cpp_headers/trace.o 00:03:19.507 CXX test/cpp_headers/trace_parser.o 00:03:19.507 CXX test/cpp_headers/tree.o 00:03:19.507 CXX test/cpp_headers/ublk.o 00:03:19.507 CXX test/cpp_headers/util.o 00:03:19.507 CXX test/cpp_headers/uuid.o 00:03:19.769 CXX test/cpp_headers/version.o 00:03:19.769 CXX test/cpp_headers/vfio_user_pci.o 00:03:19.769 CXX test/cpp_headers/vfio_user_spec.o 00:03:19.769 CXX test/cpp_headers/vhost.o 00:03:19.769 CXX test/cpp_headers/vmd.o 00:03:19.769 CXX test/cpp_headers/xor.o 00:03:19.769 CXX test/cpp_headers/zipf.o 00:03:19.769 LINK spdk_nvme_perf 00:03:19.769 LINK mem_callbacks 00:03:19.769 LINK event_perf 00:03:19.769 LINK lsvmd 00:03:19.769 LINK reactor 00:03:19.769 LINK vhost_fuzz 00:03:19.769 LINK led 00:03:19.769 LINK spdk_nvme_identify 00:03:19.769 LINK reactor_perf 00:03:19.769 CC app/vhost/vhost.o 00:03:19.769 LINK hello_sock 00:03:19.769 LINK app_repeat 00:03:19.769 LINK spdk_top 00:03:19.769 LINK thread 00:03:20.028 CC test/nvme/e2edp/nvme_dp.o 00:03:20.028 LINK scheduler 00:03:20.028 CC test/nvme/startup/startup.o 00:03:20.028 CC test/nvme/sgl/sgl.o 00:03:20.028 CC test/nvme/aer/aer.o 00:03:20.028 CC test/nvme/reset/reset.o 00:03:20.028 CC test/nvme/overhead/overhead.o 00:03:20.028 CC test/nvme/err_injection/err_injection.o 00:03:20.028 CC test/nvme/simple_copy/simple_copy.o 00:03:20.028 CC test/nvme/reserve/reserve.o 00:03:20.028 CC test/nvme/boot_partition/boot_partition.o 00:03:20.028 CC test/nvme/compliance/nvme_compliance.o 00:03:20.028 CC 
test/nvme/connect_stress/connect_stress.o 00:03:20.028 LINK idxd_perf 00:03:20.028 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:20.028 CC test/nvme/fused_ordering/fused_ordering.o 00:03:20.028 CC test/blobfs/mkfs/mkfs.o 00:03:20.028 CC test/accel/dif/dif.o 00:03:20.028 CC test/nvme/cuse/cuse.o 00:03:20.028 CC test/nvme/fdp/fdp.o 00:03:20.287 LINK vhost 00:03:20.287 CC test/lvol/esnap/esnap.o 00:03:20.287 LINK boot_partition 00:03:20.287 LINK startup 00:03:20.287 LINK err_injection 00:03:20.287 LINK connect_stress 00:03:20.287 LINK doorbell_aers 00:03:20.287 CC examples/nvme/reconnect/reconnect.o 00:03:20.287 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.287 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:20.287 CC examples/nvme/hello_world/hello_world.o 00:03:20.287 CC examples/nvme/arbitration/arbitration.o 00:03:20.287 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:20.287 CC examples/nvme/hotplug/hotplug.o 00:03:20.287 CC examples/nvme/abort/abort.o 00:03:20.287 LINK reserve 00:03:20.287 LINK fused_ordering 00:03:20.287 LINK simple_copy 00:03:20.287 LINK mkfs 00:03:20.545 LINK reset 00:03:20.545 CC examples/accel/perf/accel_perf.o 00:03:20.545 LINK overhead 00:03:20.545 LINK sgl 00:03:20.545 LINK aer 00:03:20.545 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:20.545 LINK memory_ut 00:03:20.545 CC examples/blob/cli/blobcli.o 00:03:20.545 LINK nvme_compliance 00:03:20.545 CC examples/blob/hello_world/hello_blob.o 00:03:20.545 LINK nvme_dp 00:03:20.545 LINK fdp 00:03:20.545 LINK hello_world 00:03:20.545 LINK pmr_persistence 00:03:20.802 LINK cmb_copy 00:03:20.802 LINK hotplug 00:03:20.802 LINK hello_blob 00:03:20.802 LINK arbitration 00:03:20.802 LINK hello_fsdev 00:03:20.802 LINK reconnect 00:03:20.802 LINK abort 00:03:21.059 LINK dif 00:03:21.060 LINK nvme_manage 00:03:21.060 LINK accel_perf 00:03:21.060 LINK blobcli 00:03:21.317 LINK iscsi_fuzz 00:03:21.317 CC test/bdev/bdevio/bdevio.o 00:03:21.317 CC examples/bdev/hello_world/hello_bdev.o 00:03:21.317 CC examples/bdev/bdevperf/bdevperf.o 00:03:21.575 LINK hello_bdev 00:03:21.833 LINK cuse 00:03:21.833 LINK bdevio 00:03:22.091 LINK bdevperf 00:03:22.657 CC examples/nvmf/nvmf/nvmf.o 00:03:22.916 LINK nvmf 00:03:25.445 LINK esnap 00:03:25.703 00:03:25.703 real 1m9.827s 00:03:25.703 user 11m51.307s 00:03:25.703 sys 2m39.677s 00:03:25.703 12:25:05 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:25.703 12:25:05 make -- common/autotest_common.sh@10 -- $ set +x 00:03:25.703 ************************************ 00:03:25.703 END TEST make 00:03:25.703 ************************************ 00:03:25.703 12:25:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:25.703 12:25:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:25.703 12:25:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:25.703 12:25:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.703 12:25:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:25.703 12:25:05 -- pm/common@44 -- $ pid=822153 00:03:25.703 12:25:05 -- pm/common@50 -- $ kill -TERM 822153 00:03:25.703 12:25:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.703 12:25:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:25.703 12:25:05 -- pm/common@44 -- $ pid=822155 00:03:25.703 12:25:05 -- pm/common@50 -- $ kill -TERM 822155 00:03:25.703 12:25:05 -- pm/common@42 -- $ 
for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.703 12:25:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:25.703 12:25:05 -- pm/common@44 -- $ pid=822157 00:03:25.703 12:25:05 -- pm/common@50 -- $ kill -TERM 822157 00:03:25.703 12:25:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.703 12:25:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:25.703 12:25:05 -- pm/common@44 -- $ pid=822185 00:03:25.703 12:25:05 -- pm/common@50 -- $ sudo -E kill -TERM 822185 00:03:25.703 12:25:05 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:25.703 12:25:05 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:25.703 12:25:06 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:25.703 12:25:06 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:25.703 12:25:06 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:25.961 12:25:06 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:25.961 12:25:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:25.961 12:25:06 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:25.961 12:25:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:25.961 12:25:06 -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.961 12:25:06 -- scripts/common.sh@336 -- # read -ra ver1 00:03:25.961 12:25:06 -- scripts/common.sh@337 -- # IFS=.-: 00:03:25.961 12:25:06 -- scripts/common.sh@337 -- # read -ra ver2 00:03:25.961 12:25:06 -- scripts/common.sh@338 -- # local 'op=<' 00:03:25.961 12:25:06 -- scripts/common.sh@340 -- # ver1_l=2 00:03:25.961 12:25:06 -- scripts/common.sh@341 -- # ver2_l=1 00:03:25.961 12:25:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:25.961 12:25:06 -- scripts/common.sh@344 -- # case "$op" in 00:03:25.961 12:25:06 -- scripts/common.sh@345 -- # : 1 00:03:25.961 12:25:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:25.961 12:25:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:25.961 12:25:06 -- scripts/common.sh@365 -- # decimal 1 00:03:25.961 12:25:06 -- scripts/common.sh@353 -- # local d=1 00:03:25.961 12:25:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.961 12:25:06 -- scripts/common.sh@355 -- # echo 1 00:03:25.961 12:25:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:25.961 12:25:06 -- scripts/common.sh@366 -- # decimal 2 00:03:25.961 12:25:06 -- scripts/common.sh@353 -- # local d=2 00:03:25.961 12:25:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.961 12:25:06 -- scripts/common.sh@355 -- # echo 2 00:03:25.961 12:25:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:25.961 12:25:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:25.961 12:25:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:25.961 12:25:06 -- scripts/common.sh@368 -- # return 0 00:03:25.961 12:25:06 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.961 12:25:06 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:25.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.961 --rc genhtml_branch_coverage=1 00:03:25.961 --rc genhtml_function_coverage=1 00:03:25.961 --rc genhtml_legend=1 00:03:25.961 --rc geninfo_all_blocks=1 00:03:25.961 --rc geninfo_unexecuted_blocks=1 00:03:25.961 00:03:25.962 ' 00:03:25.962 12:25:06 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:25.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.962 --rc genhtml_branch_coverage=1 00:03:25.962 --rc genhtml_function_coverage=1 00:03:25.962 --rc genhtml_legend=1 00:03:25.962 --rc geninfo_all_blocks=1 00:03:25.962 --rc geninfo_unexecuted_blocks=1 00:03:25.962 00:03:25.962 ' 00:03:25.962 12:25:06 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:25.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.962 --rc genhtml_branch_coverage=1 00:03:25.962 --rc genhtml_function_coverage=1 00:03:25.962 --rc genhtml_legend=1 00:03:25.962 --rc geninfo_all_blocks=1 00:03:25.962 --rc geninfo_unexecuted_blocks=1 00:03:25.962 00:03:25.962 ' 00:03:25.962 12:25:06 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:25.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.962 --rc genhtml_branch_coverage=1 00:03:25.962 --rc genhtml_function_coverage=1 00:03:25.962 --rc genhtml_legend=1 00:03:25.962 --rc geninfo_all_blocks=1 00:03:25.962 --rc geninfo_unexecuted_blocks=1 00:03:25.962 00:03:25.962 ' 00:03:25.962 12:25:06 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:25.962 12:25:06 -- nvmf/common.sh@7 -- # uname -s 00:03:25.962 12:25:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:25.962 12:25:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:25.962 12:25:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:25.962 12:25:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:25.962 12:25:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:25.962 12:25:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:25.962 12:25:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:25.962 12:25:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:25.962 12:25:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:25.962 12:25:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:25.962 12:25:06 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:25.962 12:25:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:25.962 12:25:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:25.962 12:25:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:25.962 12:25:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:25.962 12:25:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:25.962 12:25:06 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:25.962 12:25:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:25.962 12:25:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:25.962 12:25:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.962 12:25:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.962 12:25:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.962 12:25:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.962 12:25:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.962 12:25:06 -- paths/export.sh@5 -- # export PATH 00:03:25.962 12:25:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.962 12:25:06 -- nvmf/common.sh@51 -- # : 0 00:03:25.962 12:25:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:25.962 12:25:06 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:25.962 12:25:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:25.962 12:25:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:25.962 12:25:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.962 12:25:06 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:25.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:25.962 12:25:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:25.962 12:25:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:25.962 12:25:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:25.962 12:25:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.962 12:25:06 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.962 12:25:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.962 12:25:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:25.962 12:25:06 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
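The "[: : integer expression expected" message captured above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']' with an empty operand, which the test builtin cannot parse as a number. A minimal sketch of the usual guard for this, with an illustrative variable name and default rather than the actual common.sh code:

  # Supply a numeric default before an arithmetic test so an unset or empty
  # variable never reaches [ ... -eq ... ].
  : "${SPDK_TEST_SOMETHING:=0}"          # hypothetical knob, defaults to 0
  if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then
      echo "feature enabled"
  fi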
00:03:25.962 12:25:06 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.962 12:25:06 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:25.962 12:25:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.962 12:25:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:25.962 12:25:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:25.962 12:25:06 -- spdk/autotest.sh@48 -- # udevadm_pid=882202 00:03:25.962 12:25:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:25.962 12:25:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:25.962 12:25:06 -- pm/common@17 -- # local monitor 00:03:25.962 12:25:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.962 12:25:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.962 12:25:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.962 12:25:06 -- pm/common@21 -- # date +%s 00:03:25.962 12:25:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.962 12:25:06 -- pm/common@21 -- # date +%s 00:03:25.962 12:25:06 -- pm/common@25 -- # sleep 1 00:03:25.962 12:25:06 -- pm/common@21 -- # date +%s 00:03:25.962 12:25:06 -- pm/common@21 -- # date +%s 00:03:25.962 12:25:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731669906 00:03:25.962 12:25:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731669906 00:03:25.962 12:25:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731669906 00:03:25.962 12:25:06 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731669906 00:03:25.962 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731669906_collect-cpu-load.pm.log 00:03:25.962 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731669906_collect-vmstat.pm.log 00:03:25.962 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731669906_collect-cpu-temp.pm.log 00:03:25.962 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731669906_collect-bmc-pm.bmc.pm.log 00:03:26.897 12:25:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:26.897 12:25:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:26.897 12:25:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:26.897 12:25:07 -- common/autotest_common.sh@10 -- # set +x 00:03:26.897 12:25:07 -- spdk/autotest.sh@59 -- # create_test_list 00:03:26.897 12:25:07 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:26.897 12:25:07 -- common/autotest_common.sh@10 -- # set +x 00:03:26.897 12:25:07 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:26.897 12:25:07 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:26.897 12:25:07 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:26.897 12:25:07 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:26.897 12:25:07 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:26.897 12:25:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:26.897 12:25:07 -- common/autotest_common.sh@1457 -- # uname 00:03:26.897 12:25:07 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:26.897 12:25:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:26.897 12:25:07 -- common/autotest_common.sh@1477 -- # uname 00:03:26.897 12:25:07 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:26.897 12:25:07 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:26.897 12:25:07 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:27.154 lcov: LCOV version 1.15 00:03:27.154 12:25:07 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:53.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:53.778 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:05.982 12:25:44 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:05.982 12:25:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.982 12:25:44 -- common/autotest_common.sh@10 -- # set +x 00:04:05.982 12:25:44 -- spdk/autotest.sh@78 -- # rm -f 00:04:05.982 12:25:44 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.982 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:05.982 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:05.982 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:05.982 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:05.982 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:05.982 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:05.982 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:05.982 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:05.982 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:05.982 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:05.982 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:05.982 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:05.982 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:05.982 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:05.982 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:05.982 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:05.982 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:05.982 12:25:45 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:05.982 12:25:45 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:05.982 12:25:45 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:05.982 12:25:45 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:05.982 12:25:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:05.982 12:25:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:05.982 12:25:45 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:05.982 12:25:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:05.982 12:25:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.982 12:25:45 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:05.982 12:25:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.982 12:25:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.982 12:25:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:05.982 12:25:45 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:05.982 12:25:45 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:05.982 No valid GPT data, bailing 00:04:05.982 12:25:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:05.982 12:25:45 -- scripts/common.sh@394 -- # pt= 00:04:05.982 12:25:45 -- scripts/common.sh@395 -- # return 1 00:04:05.982 12:25:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:05.982 1+0 records in 00:04:05.982 1+0 records out 00:04:05.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00224371 s, 467 MB/s 00:04:05.982 12:25:45 -- spdk/autotest.sh@105 -- # sync 00:04:05.982 12:25:45 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:05.982 12:25:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:05.982 12:25:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:07.886 12:25:47 -- spdk/autotest.sh@111 -- # uname -s 00:04:07.886 12:25:47 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:07.886 12:25:47 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:07.886 12:25:47 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:08.821 Hugepages 00:04:08.821 node hugesize free / total 00:04:08.821 node0 1048576kB 0 / 0 00:04:08.821 node0 2048kB 0 / 0 00:04:08.821 node1 1048576kB 0 / 0 00:04:08.821 node1 2048kB 0 / 0 00:04:08.821 00:04:08.821 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.821 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:08.821 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:08.821 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:08.821 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:08.821 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:08.821 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:08.821 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:08.821 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:08.821 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:08.821 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:08.821 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:08.821 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:08.821 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:08.821 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:08.821 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:08.822 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:08.822 NVMe 0000:88:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:04:08.822 12:25:49 -- spdk/autotest.sh@117 -- # uname -s 00:04:09.080 12:25:49 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:09.080 12:25:49 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:09.080 12:25:49 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.015 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:10.274 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:10.274 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:10.274 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:10.274 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:10.274 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:10.274 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:10.274 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:10.274 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:10.274 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:10.274 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:10.274 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:10.274 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:10.274 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:10.274 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:10.274 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:11.210 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.210 12:25:51 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:12.589 12:25:52 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:12.589 12:25:52 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:12.589 12:25:52 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:12.589 12:25:52 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:12.589 12:25:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:12.589 12:25:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:12.589 12:25:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:12.589 12:25:52 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:12.589 12:25:52 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:12.589 12:25:52 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:12.589 12:25:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:04:12.589 12:25:52 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.524 Waiting for block devices as requested 00:04:13.524 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:13.783 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:13.783 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:14.042 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:14.042 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:14.042 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:14.042 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:14.301 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:14.301 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:14.301 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:14.559 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:14.559 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:14.559 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:14.559 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:14.818 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:14.818 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:14.818 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:04:15.077 12:25:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:15.077 12:25:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:15.077 12:25:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:15.077 12:25:55 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:04:15.077 12:25:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:15.077 12:25:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:15.077 12:25:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:15.077 12:25:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:15.077 12:25:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:15.077 12:25:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:15.077 12:25:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:15.077 12:25:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:15.077 12:25:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:15.077 12:25:55 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:15.077 12:25:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:15.077 12:25:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:15.077 12:25:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:15.077 12:25:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:15.077 12:25:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:15.077 12:25:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:15.077 12:25:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:15.077 12:25:55 -- common/autotest_common.sh@1543 -- # continue 00:04:15.077 12:25:55 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:15.077 12:25:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.077 12:25:55 -- common/autotest_common.sh@10 -- # set +x 00:04:15.077 12:25:55 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:15.077 12:25:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.077 12:25:55 -- common/autotest_common.sh@10 -- # set +x 00:04:15.077 12:25:55 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.453 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:16.453 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:16.453 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:16.453 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:16.453 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:16.453 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:16.453 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:16.453 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:16.453 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:16.453 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:16.453 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:16.453 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:16.453 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:16.453 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:16.453 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:16.453 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:17.394 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:17.394 12:25:57 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:17.394 12:25:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:17.394 12:25:57 -- common/autotest_common.sh@10 -- # set +x 00:04:17.394 12:25:57 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:17.394 12:25:57 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:17.394 12:25:57 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:17.394 12:25:57 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:17.394 12:25:57 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:17.394 12:25:57 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:17.394 12:25:57 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:17.394 12:25:57 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:17.394 12:25:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:17.394 12:25:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:17.394 12:25:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:17.394 12:25:57 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:17.394 12:25:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:17.661 12:25:57 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:17.661 12:25:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:04:17.661 12:25:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:17.661 12:25:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:17.661 12:25:57 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:17.661 12:25:57 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:17.661 12:25:57 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:17.661 12:25:57 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:17.661 12:25:57 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:04:17.661 12:25:57 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:04:17.661 12:25:57 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=892585 00:04:17.661 12:25:57 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.661 12:25:57 -- common/autotest_common.sh@1585 -- # waitforlisten 892585 00:04:17.661 12:25:57 -- common/autotest_common.sh@835 -- # '[' -z 892585 ']' 00:04:17.661 12:25:57 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.661 12:25:57 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.661 12:25:57 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.661 12:25:57 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.661 12:25:57 -- common/autotest_common.sh@10 -- # set +x 00:04:17.661 [2024-11-15 12:25:57.811550] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:04:17.661 [2024-11-15 12:25:57.811629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892585 ] 00:04:17.661 [2024-11-15 12:25:57.876566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.661 [2024-11-15 12:25:57.937264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.919 12:25:58 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.919 12:25:58 -- common/autotest_common.sh@868 -- # return 0 00:04:17.919 12:25:58 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:17.919 12:25:58 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:17.919 12:25:58 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:21.210 nvme0n1 00:04:21.210 12:26:01 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:21.469 [2024-11-15 12:26:01.566233] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:21.469 [2024-11-15 12:26:01.566282] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:21.469 request: 00:04:21.469 { 00:04:21.469 "nvme_ctrlr_name": "nvme0", 00:04:21.469 "password": "test", 00:04:21.469 "method": "bdev_nvme_opal_revert", 00:04:21.469 "req_id": 1 00:04:21.469 } 00:04:21.469 Got JSON-RPC error response 00:04:21.469 response: 00:04:21.469 { 00:04:21.469 "code": -32603, 00:04:21.469 "message": "Internal error" 00:04:21.469 } 00:04:21.469 12:26:01 -- common/autotest_common.sh@1591 -- # true 00:04:21.470 12:26:01 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:21.470 12:26:01 -- common/autotest_common.sh@1595 -- # killprocess 892585 00:04:21.470 12:26:01 -- common/autotest_common.sh@954 -- # '[' -z 892585 ']' 00:04:21.470 12:26:01 -- common/autotest_common.sh@958 -- # kill -0 892585 00:04:21.470 12:26:01 -- common/autotest_common.sh@959 -- # uname 00:04:21.470 12:26:01 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.470 12:26:01 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 892585 00:04:21.470 12:26:01 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.470 12:26:01 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.470 12:26:01 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 892585' 00:04:21.470 killing process with pid 892585 00:04:21.470 12:26:01 -- common/autotest_common.sh@973 -- # kill 892585 00:04:21.470 12:26:01 -- common/autotest_common.sh@978 -- # wait 892585 00:04:23.486 12:26:03 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:23.486 12:26:03 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:23.486 12:26:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:23.486 12:26:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:23.486 12:26:03 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:23.486 12:26:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.486 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:04:23.486 12:26:03 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:23.486 12:26:03 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
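run_test, invoked just above for the env suite, is the helper that produces the START TEST / END TEST banners and per-test timing seen throughout this log. A hedged outline of what a wrapper of that shape does; the real implementation in autotest_common.sh carries more bookkeeping than this:

  # Print banners around a named test, time it, and propagate its exit code.
  run_test_sketch() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      local start=$SECONDS rc=0
      "$@" || rc=$?
      echo "************ END TEST $name ($((SECONDS - start))s, rc=$rc) ************"
      return $rc
  }
  # e.g. run_test_sketch env ./test/env/env.sh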
00:04:23.486 12:26:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.486 12:26:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.486 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:04:23.486 ************************************ 00:04:23.486 START TEST env 00:04:23.486 ************************************ 00:04:23.486 12:26:03 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:23.486 * Looking for test storage... 00:04:23.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:23.486 12:26:03 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.486 12:26:03 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.486 12:26:03 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.486 12:26:03 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.486 12:26:03 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.486 12:26:03 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.486 12:26:03 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.487 12:26:03 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.487 12:26:03 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.487 12:26:03 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.487 12:26:03 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.487 12:26:03 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.487 12:26:03 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.487 12:26:03 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.487 12:26:03 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.487 12:26:03 env -- scripts/common.sh@344 -- # case "$op" in 00:04:23.487 12:26:03 env -- scripts/common.sh@345 -- # : 1 00:04:23.487 12:26:03 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.487 12:26:03 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.487 12:26:03 env -- scripts/common.sh@365 -- # decimal 1 00:04:23.487 12:26:03 env -- scripts/common.sh@353 -- # local d=1 00:04:23.487 12:26:03 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.487 12:26:03 env -- scripts/common.sh@355 -- # echo 1 00:04:23.487 12:26:03 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.487 12:26:03 env -- scripts/common.sh@366 -- # decimal 2 00:04:23.487 12:26:03 env -- scripts/common.sh@353 -- # local d=2 00:04:23.487 12:26:03 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.487 12:26:03 env -- scripts/common.sh@355 -- # echo 2 00:04:23.487 12:26:03 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.487 12:26:03 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.487 12:26:03 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.487 12:26:03 env -- scripts/common.sh@368 -- # return 0 00:04:23.487 12:26:03 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.487 12:26:03 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.487 --rc genhtml_branch_coverage=1 00:04:23.487 --rc genhtml_function_coverage=1 00:04:23.487 --rc genhtml_legend=1 00:04:23.487 --rc geninfo_all_blocks=1 00:04:23.487 --rc geninfo_unexecuted_blocks=1 00:04:23.487 00:04:23.487 ' 00:04:23.487 12:26:03 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.487 --rc genhtml_branch_coverage=1 00:04:23.487 --rc genhtml_function_coverage=1 00:04:23.487 --rc genhtml_legend=1 00:04:23.487 --rc geninfo_all_blocks=1 00:04:23.487 --rc geninfo_unexecuted_blocks=1 00:04:23.487 00:04:23.487 ' 00:04:23.487 12:26:03 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.487 --rc genhtml_branch_coverage=1 00:04:23.487 --rc genhtml_function_coverage=1 00:04:23.487 --rc genhtml_legend=1 00:04:23.487 --rc geninfo_all_blocks=1 00:04:23.487 --rc geninfo_unexecuted_blocks=1 00:04:23.487 00:04:23.487 ' 00:04:23.487 12:26:03 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.487 --rc genhtml_branch_coverage=1 00:04:23.487 --rc genhtml_function_coverage=1 00:04:23.487 --rc genhtml_legend=1 00:04:23.487 --rc geninfo_all_blocks=1 00:04:23.487 --rc geninfo_unexecuted_blocks=1 00:04:23.487 00:04:23.487 ' 00:04:23.487 12:26:03 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:23.487 12:26:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.487 12:26:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.487 12:26:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.487 ************************************ 00:04:23.487 START TEST env_memory 00:04:23.487 ************************************ 00:04:23.487 12:26:03 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:23.487 00:04:23.487 00:04:23.487 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.487 http://cunit.sourceforge.net/ 00:04:23.487 00:04:23.487 00:04:23.487 Suite: memory 00:04:23.487 Test: alloc and free memory map ...[2024-11-15 12:26:03.620937] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:23.487 passed 00:04:23.487 Test: mem map translation ...[2024-11-15 12:26:03.640628] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:23.487 [2024-11-15 12:26:03.640649] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:23.487 [2024-11-15 12:26:03.640705] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:23.487 [2024-11-15 12:26:03.640722] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:23.487 passed 00:04:23.487 Test: mem map registration ...[2024-11-15 12:26:03.681673] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:23.487 [2024-11-15 12:26:03.681692] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:23.487 passed 00:04:23.487 Test: mem map adjacent registrations ...passed 00:04:23.487 00:04:23.487 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.487 suites 1 1 n/a 0 0 00:04:23.487 tests 4 4 4 0 0 00:04:23.487 asserts 152 152 152 0 n/a 00:04:23.487 00:04:23.487 Elapsed time = 0.141 seconds 00:04:23.487 00:04:23.487 real 0m0.150s 00:04:23.487 user 0m0.137s 00:04:23.487 sys 0m0.012s 00:04:23.487 12:26:03 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.487 12:26:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:23.487 ************************************ 00:04:23.487 END TEST env_memory 00:04:23.487 ************************************ 00:04:23.487 12:26:03 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:23.487 12:26:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.487 12:26:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.487 12:26:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.487 ************************************ 00:04:23.487 START TEST env_vtophys 00:04:23.487 ************************************ 00:04:23.487 12:26:03 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:23.487 EAL: lib.eal log level changed from notice to debug 00:04:23.487 EAL: Detected lcore 0 as core 0 on socket 0 00:04:23.487 EAL: Detected lcore 1 as core 1 on socket 0 00:04:23.487 EAL: Detected lcore 2 as core 2 on socket 0 00:04:23.487 EAL: Detected lcore 3 as core 3 on socket 0 00:04:23.487 EAL: Detected lcore 4 as core 4 on socket 0 00:04:23.487 EAL: Detected lcore 5 as core 5 on socket 0 00:04:23.487 EAL: Detected lcore 6 as core 8 on socket 0 00:04:23.487 EAL: Detected lcore 7 as core 9 on socket 0 00:04:23.487 EAL: Detected lcore 8 as core 10 on socket 0 00:04:23.487 EAL: Detected lcore 9 as core 11 on socket 0 00:04:23.487 EAL: Detected lcore 10 
as core 12 on socket 0 00:04:23.487 EAL: Detected lcore 11 as core 13 on socket 0 00:04:23.487 EAL: Detected lcore 12 as core 0 on socket 1 00:04:23.487 EAL: Detected lcore 13 as core 1 on socket 1 00:04:23.487 EAL: Detected lcore 14 as core 2 on socket 1 00:04:23.487 EAL: Detected lcore 15 as core 3 on socket 1 00:04:23.487 EAL: Detected lcore 16 as core 4 on socket 1 00:04:23.487 EAL: Detected lcore 17 as core 5 on socket 1 00:04:23.487 EAL: Detected lcore 18 as core 8 on socket 1 00:04:23.488 EAL: Detected lcore 19 as core 9 on socket 1 00:04:23.488 EAL: Detected lcore 20 as core 10 on socket 1 00:04:23.488 EAL: Detected lcore 21 as core 11 on socket 1 00:04:23.488 EAL: Detected lcore 22 as core 12 on socket 1 00:04:23.488 EAL: Detected lcore 23 as core 13 on socket 1 00:04:23.488 EAL: Detected lcore 24 as core 0 on socket 0 00:04:23.488 EAL: Detected lcore 25 as core 1 on socket 0 00:04:23.488 EAL: Detected lcore 26 as core 2 on socket 0 00:04:23.488 EAL: Detected lcore 27 as core 3 on socket 0 00:04:23.488 EAL: Detected lcore 28 as core 4 on socket 0 00:04:23.488 EAL: Detected lcore 29 as core 5 on socket 0 00:04:23.488 EAL: Detected lcore 30 as core 8 on socket 0 00:04:23.488 EAL: Detected lcore 31 as core 9 on socket 0 00:04:23.488 EAL: Detected lcore 32 as core 10 on socket 0 00:04:23.488 EAL: Detected lcore 33 as core 11 on socket 0 00:04:23.488 EAL: Detected lcore 34 as core 12 on socket 0 00:04:23.488 EAL: Detected lcore 35 as core 13 on socket 0 00:04:23.488 EAL: Detected lcore 36 as core 0 on socket 1 00:04:23.488 EAL: Detected lcore 37 as core 1 on socket 1 00:04:23.488 EAL: Detected lcore 38 as core 2 on socket 1 00:04:23.488 EAL: Detected lcore 39 as core 3 on socket 1 00:04:23.488 EAL: Detected lcore 40 as core 4 on socket 1 00:04:23.488 EAL: Detected lcore 41 as core 5 on socket 1 00:04:23.488 EAL: Detected lcore 42 as core 8 on socket 1 00:04:23.488 EAL: Detected lcore 43 as core 9 on socket 1 00:04:23.488 EAL: Detected lcore 44 as core 10 on socket 1 00:04:23.488 EAL: Detected lcore 45 as core 11 on socket 1 00:04:23.488 EAL: Detected lcore 46 as core 12 on socket 1 00:04:23.488 EAL: Detected lcore 47 as core 13 on socket 1 00:04:23.488 EAL: Maximum logical cores by configuration: 128 00:04:23.488 EAL: Detected CPU lcores: 48 00:04:23.488 EAL: Detected NUMA nodes: 2 00:04:23.488 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:23.488 EAL: Detected shared linkage of DPDK 00:04:23.488 EAL: No shared files mode enabled, IPC will be disabled 00:04:23.747 EAL: Bus pci wants IOVA as 'DC' 00:04:23.747 EAL: Buses did not request a specific IOVA mode. 00:04:23.747 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:23.747 EAL: Selected IOVA mode 'VA' 00:04:23.747 EAL: Probing VFIO support... 00:04:23.747 EAL: IOMMU type 1 (Type 1) is supported 00:04:23.747 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:23.747 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:23.747 EAL: VFIO support initialized 00:04:23.747 EAL: Ask a virtual area of 0x2e000 bytes 00:04:23.747 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:23.747 EAL: Setting up physically contiguous memory... 
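The "Probing VFIO support" and "IOMMU is available, selecting IOVA as VA mode" lines reflect checks that can also be made by hand before a run. A rough approximation from the shell, using the standard sysfs locations rather than anything taken from this log:

  # DPDK can use IOVA-as-VA through VFIO when IOMMU groups exist and the
  # vfio-pci module can be loaded; otherwise it falls back to PA / no-IOMMU.
  if compgen -G "/sys/kernel/iommu_groups/*" > /dev/null; then
      echo "IOMMU groups present: VA IOVA mode possible"
  else
      echo "no IOMMU groups: expect PA or no-IOMMU mode"
  fi
  sudo modprobe vfio-pci && echo "vfio-pci available"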
00:04:23.747 EAL: Setting maximum number of open files to 524288 00:04:23.747 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:23.747 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:23.747 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:23.747 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.747 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:23.747 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.747 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.747 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:23.747 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:23.747 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.747 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:23.747 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.747 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.747 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:23.747 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:23.747 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.747 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:23.747 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.747 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.747 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:23.747 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:23.747 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.747 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:23.747 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.747 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.747 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:23.747 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:23.747 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:23.747 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.747 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:23.747 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:23.747 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.747 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:23.747 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:23.747 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.747 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:23.747 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:23.747 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.747 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:23.747 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:23.747 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.747 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:23.747 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:23.747 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.747 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:23.747 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:23.747 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.747 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:23.747 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:23.747 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.747 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:23.747 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:23.747 EAL: Hugepages will be freed exactly as allocated. 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: TSC frequency is ~2700000 KHz 00:04:23.748 EAL: Main lcore 0 is ready (tid=7feac4b51a00;cpuset=[0]) 00:04:23.748 EAL: Trying to obtain current memory policy. 00:04:23.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.748 EAL: Restoring previous memory policy: 0 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was expanded by 2MB 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:23.748 EAL: Mem event callback 'spdk:(nil)' registered 00:04:23.748 00:04:23.748 00:04:23.748 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.748 http://cunit.sourceforge.net/ 00:04:23.748 00:04:23.748 00:04:23.748 Suite: components_suite 00:04:23.748 Test: vtophys_malloc_test ...passed 00:04:23.748 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:23.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.748 EAL: Restoring previous memory policy: 4 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was expanded by 4MB 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was shrunk by 4MB 00:04:23.748 EAL: Trying to obtain current memory policy. 00:04:23.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.748 EAL: Restoring previous memory policy: 4 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was expanded by 6MB 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was shrunk by 6MB 00:04:23.748 EAL: Trying to obtain current memory policy. 00:04:23.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.748 EAL: Restoring previous memory policy: 4 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was expanded by 10MB 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was shrunk by 10MB 00:04:23.748 EAL: Trying to obtain current memory policy. 
00:04:23.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.748 EAL: Restoring previous memory policy: 4 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was expanded by 18MB 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was shrunk by 18MB 00:04:23.748 EAL: Trying to obtain current memory policy. 00:04:23.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.748 EAL: Restoring previous memory policy: 4 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was expanded by 34MB 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was shrunk by 34MB 00:04:23.748 EAL: Trying to obtain current memory policy. 00:04:23.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.748 EAL: Restoring previous memory policy: 4 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was expanded by 66MB 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was shrunk by 66MB 00:04:23.748 EAL: Trying to obtain current memory policy. 00:04:23.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.748 EAL: Restoring previous memory policy: 4 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was expanded by 130MB 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was shrunk by 130MB 00:04:23.748 EAL: Trying to obtain current memory policy. 00:04:23.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.748 EAL: Restoring previous memory policy: 4 00:04:23.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.748 EAL: request: mp_malloc_sync 00:04:23.748 EAL: No shared files mode enabled, IPC is disabled 00:04:23.748 EAL: Heap on socket 0 was expanded by 258MB 00:04:24.006 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.006 EAL: request: mp_malloc_sync 00:04:24.006 EAL: No shared files mode enabled, IPC is disabled 00:04:24.006 EAL: Heap on socket 0 was shrunk by 258MB 00:04:24.006 EAL: Trying to obtain current memory policy. 
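Each "Heap on socket 0 was expanded/shrunk by N MB" pair in this test is DPDK's dynamic memory subsystem grabbing and releasing 2 MB hugepages as the vtophys malloc test allocates and frees. The per-NUMA-node hugepage pools it draws from can be inspected or sized through sysfs; a small illustrative example, with an arbitrary count rather than this rig's configuration:

  # Show and reserve 2 MB hugepages on NUMA node 0; node1 has an analogous path.
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages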
00:04:24.006 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.006 EAL: Restoring previous memory policy: 4 00:04:24.006 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.006 EAL: request: mp_malloc_sync 00:04:24.006 EAL: No shared files mode enabled, IPC is disabled 00:04:24.006 EAL: Heap on socket 0 was expanded by 514MB 00:04:24.265 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.265 EAL: request: mp_malloc_sync 00:04:24.265 EAL: No shared files mode enabled, IPC is disabled 00:04:24.265 EAL: Heap on socket 0 was shrunk by 514MB 00:04:24.265 EAL: Trying to obtain current memory policy. 00:04:24.265 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.523 EAL: Restoring previous memory policy: 4 00:04:24.523 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.523 EAL: request: mp_malloc_sync 00:04:24.523 EAL: No shared files mode enabled, IPC is disabled 00:04:24.523 EAL: Heap on socket 0 was expanded by 1026MB 00:04:24.781 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.039 EAL: request: mp_malloc_sync 00:04:25.039 EAL: No shared files mode enabled, IPC is disabled 00:04:25.039 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:25.039 passed 00:04:25.039 00:04:25.039 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.039 suites 1 1 n/a 0 0 00:04:25.039 tests 2 2 2 0 0 00:04:25.039 asserts 497 497 497 0 n/a 00:04:25.039 00:04:25.039 Elapsed time = 1.323 seconds 00:04:25.039 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.039 EAL: request: mp_malloc_sync 00:04:25.039 EAL: No shared files mode enabled, IPC is disabled 00:04:25.039 EAL: Heap on socket 0 was shrunk by 2MB 00:04:25.039 EAL: No shared files mode enabled, IPC is disabled 00:04:25.039 EAL: No shared files mode enabled, IPC is disabled 00:04:25.039 EAL: No shared files mode enabled, IPC is disabled 00:04:25.039 00:04:25.039 real 0m1.458s 00:04:25.039 user 0m0.853s 00:04:25.039 sys 0m0.557s 00:04:25.039 12:26:05 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.039 12:26:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:25.039 ************************************ 00:04:25.039 END TEST env_vtophys 00:04:25.039 ************************************ 00:04:25.039 12:26:05 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:25.039 12:26:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.039 12:26:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.039 12:26:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.039 ************************************ 00:04:25.039 START TEST env_pci 00:04:25.039 ************************************ 00:04:25.039 12:26:05 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:25.039 00:04:25.039 00:04:25.039 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.039 http://cunit.sourceforge.net/ 00:04:25.039 00:04:25.039 00:04:25.039 Suite: pci 00:04:25.039 Test: pci_hook ...[2024-11-15 12:26:05.307909] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 893491 has claimed it 00:04:25.039 EAL: Cannot find device (10000:00:01.0) 00:04:25.039 EAL: Failed to attach device on primary process 00:04:25.039 passed 00:04:25.039 00:04:25.039 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:25.039 suites 1 1 n/a 0 0 00:04:25.039 tests 1 1 1 0 0 00:04:25.039 asserts 25 25 25 0 n/a 00:04:25.039 00:04:25.039 Elapsed time = 0.022 seconds 00:04:25.039 00:04:25.039 real 0m0.035s 00:04:25.039 user 0m0.008s 00:04:25.039 sys 0m0.027s 00:04:25.039 12:26:05 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.039 12:26:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:25.039 ************************************ 00:04:25.039 END TEST env_pci 00:04:25.039 ************************************ 00:04:25.039 12:26:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:25.039 12:26:05 env -- env/env.sh@15 -- # uname 00:04:25.039 12:26:05 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:25.040 12:26:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:25.040 12:26:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:25.040 12:26:05 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:25.040 12:26:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.040 12:26:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.299 ************************************ 00:04:25.299 START TEST env_dpdk_post_init 00:04:25.299 ************************************ 00:04:25.299 12:26:05 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:25.299 EAL: Detected CPU lcores: 48 00:04:25.299 EAL: Detected NUMA nodes: 2 00:04:25.299 EAL: Detected shared linkage of DPDK 00:04:25.299 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:25.299 EAL: Selected IOVA mode 'VA' 00:04:25.299 EAL: VFIO support initialized 00:04:25.299 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.299 EAL: Using IOMMU type 1 (Type 1) 00:04:25.299 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:25.299 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:25.299 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:25.299 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:25.299 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:25.299 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:25.299 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:25.299 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:25.299 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:25.299 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:25.299 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:25.299 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:25.560 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:25.560 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:25.560 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:25.560 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:26.128 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
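The probe lines above show env_dpdk_post_init claiming the I/OAT DMA channels on sockets 0 and 1 and then the NVMe controller at 0000:88:00.0 with the spdk_nvme driver. The test can be repeated outside the harness with the same arguments it was given here (core mask 0x1 and a pinned base virtual address); a sketch, assuming hugepages are still configured and the device is still bound to a userspace driver (e.g. via scripts/setup.sh):

    # Re-run the DPDK post-init test standalone with the arguments used above.
    sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000

    # Check which driver the probed NVMe device is currently bound to.
    basename "$(readlink /sys/bus/pci/devices/0000:88:00.0/driver)"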
00:04:29.409 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:29.409 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:29.667 Starting DPDK initialization... 00:04:29.667 Starting SPDK post initialization... 00:04:29.667 SPDK NVMe probe 00:04:29.667 Attaching to 0000:88:00.0 00:04:29.667 Attached to 0000:88:00.0 00:04:29.667 Cleaning up... 00:04:29.667 00:04:29.667 real 0m4.446s 00:04:29.667 user 0m3.076s 00:04:29.667 sys 0m0.429s 00:04:29.667 12:26:09 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.667 12:26:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.667 ************************************ 00:04:29.667 END TEST env_dpdk_post_init 00:04:29.667 ************************************ 00:04:29.667 12:26:09 env -- env/env.sh@26 -- # uname 00:04:29.667 12:26:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:29.667 12:26:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:29.667 12:26:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.667 12:26:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.667 12:26:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.667 ************************************ 00:04:29.667 START TEST env_mem_callbacks 00:04:29.667 ************************************ 00:04:29.667 12:26:09 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:29.667 EAL: Detected CPU lcores: 48 00:04:29.667 EAL: Detected NUMA nodes: 2 00:04:29.667 EAL: Detected shared linkage of DPDK 00:04:29.667 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.667 EAL: Selected IOVA mode 'VA' 00:04:29.667 EAL: VFIO support initialized 00:04:29.667 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.667 00:04:29.667 00:04:29.667 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.667 http://cunit.sourceforge.net/ 00:04:29.667 00:04:29.667 00:04:29.667 Suite: memory 00:04:29.667 Test: test ... 
00:04:29.667 register 0x200000200000 2097152 00:04:29.667 malloc 3145728 00:04:29.667 register 0x200000400000 4194304 00:04:29.667 buf 0x200000500000 len 3145728 PASSED 00:04:29.667 malloc 64 00:04:29.667 buf 0x2000004fff40 len 64 PASSED 00:04:29.667 malloc 4194304 00:04:29.667 register 0x200000800000 6291456 00:04:29.667 buf 0x200000a00000 len 4194304 PASSED 00:04:29.667 free 0x200000500000 3145728 00:04:29.667 free 0x2000004fff40 64 00:04:29.667 unregister 0x200000400000 4194304 PASSED 00:04:29.667 free 0x200000a00000 4194304 00:04:29.667 unregister 0x200000800000 6291456 PASSED 00:04:29.667 malloc 8388608 00:04:29.667 register 0x200000400000 10485760 00:04:29.667 buf 0x200000600000 len 8388608 PASSED 00:04:29.667 free 0x200000600000 8388608 00:04:29.667 unregister 0x200000400000 10485760 PASSED 00:04:29.667 passed 00:04:29.667 00:04:29.667 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.667 suites 1 1 n/a 0 0 00:04:29.668 tests 1 1 1 0 0 00:04:29.668 asserts 15 15 15 0 n/a 00:04:29.668 00:04:29.668 Elapsed time = 0.005 seconds 00:04:29.668 00:04:29.668 real 0m0.045s 00:04:29.668 user 0m0.014s 00:04:29.668 sys 0m0.030s 00:04:29.668 12:26:09 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.668 12:26:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:29.668 ************************************ 00:04:29.668 END TEST env_mem_callbacks 00:04:29.668 ************************************ 00:04:29.668 00:04:29.668 real 0m6.538s 00:04:29.668 user 0m4.283s 00:04:29.668 sys 0m1.287s 00:04:29.668 12:26:09 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.668 12:26:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.668 ************************************ 00:04:29.668 END TEST env 00:04:29.668 ************************************ 00:04:29.668 12:26:09 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:29.668 12:26:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.668 12:26:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.668 12:26:09 -- common/autotest_common.sh@10 -- # set +x 00:04:29.668 ************************************ 00:04:29.668 START TEST rpc 00:04:29.668 ************************************ 00:04:29.668 12:26:09 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:29.926 * Looking for test storage... 
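The register/unregister lines above come from the mem_callbacks unit test: it registers and frees buffers of varying sizes while its memory callback logs every range, and the PASSED markers confirm each registration was matched by a corresponding unregistration before the env group wraps up in roughly 6.5 seconds. Either the single binary or the whole group can be re-run directly; the binary path is taken from the invocation earlier in this log, while test/env/env.sh as the group driver is only inferred from the env/env.sh xtrace lines and may differ:

    # Just the callbacks test; like the other env tests it needs root and hugepages.
    sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks

    # Or the whole env group via its driver script (path assumed from the xtrace above).
    sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh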
00:04:29.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.926 12:26:10 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.926 12:26:10 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.926 12:26:10 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.926 12:26:10 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.926 12:26:10 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.926 12:26:10 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.926 12:26:10 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.926 12:26:10 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.926 12:26:10 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.926 12:26:10 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.926 12:26:10 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.926 12:26:10 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:29.926 12:26:10 rpc -- scripts/common.sh@345 -- # : 1 00:04:29.926 12:26:10 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.926 12:26:10 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.926 12:26:10 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:29.926 12:26:10 rpc -- scripts/common.sh@353 -- # local d=1 00:04:29.926 12:26:10 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.926 12:26:10 rpc -- scripts/common.sh@355 -- # echo 1 00:04:29.926 12:26:10 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.926 12:26:10 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.926 12:26:10 rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.926 12:26:10 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.926 12:26:10 rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.926 12:26:10 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.926 12:26:10 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.926 12:26:10 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.926 12:26:10 rpc -- scripts/common.sh@368 -- # return 0 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.926 --rc genhtml_branch_coverage=1 00:04:29.926 --rc genhtml_function_coverage=1 00:04:29.926 --rc genhtml_legend=1 00:04:29.926 --rc geninfo_all_blocks=1 00:04:29.926 --rc geninfo_unexecuted_blocks=1 00:04:29.926 00:04:29.926 ' 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.926 --rc genhtml_branch_coverage=1 00:04:29.926 --rc genhtml_function_coverage=1 00:04:29.926 --rc genhtml_legend=1 00:04:29.926 --rc geninfo_all_blocks=1 00:04:29.926 --rc geninfo_unexecuted_blocks=1 00:04:29.926 00:04:29.926 ' 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.926 --rc genhtml_branch_coverage=1 00:04:29.926 --rc genhtml_function_coverage=1 
00:04:29.926 --rc genhtml_legend=1 00:04:29.926 --rc geninfo_all_blocks=1 00:04:29.926 --rc geninfo_unexecuted_blocks=1 00:04:29.926 00:04:29.926 ' 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.926 --rc genhtml_branch_coverage=1 00:04:29.926 --rc genhtml_function_coverage=1 00:04:29.926 --rc genhtml_legend=1 00:04:29.926 --rc geninfo_all_blocks=1 00:04:29.926 --rc geninfo_unexecuted_blocks=1 00:04:29.926 00:04:29.926 ' 00:04:29.926 12:26:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=894270 00:04:29.926 12:26:10 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:29.926 12:26:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.926 12:26:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 894270 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@835 -- # '[' -z 894270 ']' 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.926 12:26:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.926 [2024-11-15 12:26:10.192867] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:04:29.926 [2024-11-15 12:26:10.192967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894270 ] 00:04:29.926 [2024-11-15 12:26:10.262138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.184 [2024-11-15 12:26:10.323343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:30.184 [2024-11-15 12:26:10.323415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 894270' to capture a snapshot of events at runtime. 00:04:30.184 [2024-11-15 12:26:10.323430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:30.184 [2024-11-15 12:26:10.323441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:30.184 [2024-11-15 12:26:10.323451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid894270 for offline analysis/debug. 
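The app_setup_trace notices above give the exact command for snapshotting the bdev tracepoint group the target was started with (-e bdev), and the rpc_* tests that follow drive ordinary JSON-RPC methods through the rpc_cmd wrapper on the default /var/tmp/spdk.sock socket. Both are easy to reproduce by hand while the target (pid 894270 here) is still running; a sketch, assuming the usual build/bin and scripts/ layout of the checked-out SPDK tree:

    # Snapshot the bdev tracepoints, exactly as the notice above suggests.
    ./build/bin/spdk_trace -s spdk_tgt -p 894270

    # The same calls rpc_integrity exercises below, issued directly with rpc.py.
    ./scripts/rpc.py bdev_malloc_create 8 512                      # prints the new bdev name, e.g. Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # layer a passthru vbdev on top of it
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # 2 bdevs: Malloc0 and Passthru0
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0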
00:04:30.184 [2024-11-15 12:26:10.324135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.442 12:26:10 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.442 12:26:10 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.442 12:26:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:30.442 12:26:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:30.442 12:26:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:30.442 12:26:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:30.442 12:26:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.442 12:26:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.442 12:26:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.442 ************************************ 00:04:30.442 START TEST rpc_integrity 00:04:30.442 ************************************ 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:30.442 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.442 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:30.442 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:30.442 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:30.442 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.442 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:30.442 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.442 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:30.442 { 00:04:30.442 "name": "Malloc0", 00:04:30.442 "aliases": [ 00:04:30.442 "f348f0af-ffcb-4cc7-b1e8-85a04a033dbc" 00:04:30.442 ], 00:04:30.442 "product_name": "Malloc disk", 00:04:30.442 "block_size": 512, 00:04:30.442 "num_blocks": 16384, 00:04:30.442 "uuid": "f348f0af-ffcb-4cc7-b1e8-85a04a033dbc", 00:04:30.442 "assigned_rate_limits": { 00:04:30.442 "rw_ios_per_sec": 0, 00:04:30.442 "rw_mbytes_per_sec": 0, 00:04:30.442 "r_mbytes_per_sec": 0, 00:04:30.442 "w_mbytes_per_sec": 0 00:04:30.442 }, 
00:04:30.442 "claimed": false, 00:04:30.442 "zoned": false, 00:04:30.442 "supported_io_types": { 00:04:30.442 "read": true, 00:04:30.442 "write": true, 00:04:30.442 "unmap": true, 00:04:30.442 "flush": true, 00:04:30.442 "reset": true, 00:04:30.442 "nvme_admin": false, 00:04:30.442 "nvme_io": false, 00:04:30.442 "nvme_io_md": false, 00:04:30.442 "write_zeroes": true, 00:04:30.442 "zcopy": true, 00:04:30.442 "get_zone_info": false, 00:04:30.442 "zone_management": false, 00:04:30.442 "zone_append": false, 00:04:30.442 "compare": false, 00:04:30.442 "compare_and_write": false, 00:04:30.442 "abort": true, 00:04:30.442 "seek_hole": false, 00:04:30.442 "seek_data": false, 00:04:30.442 "copy": true, 00:04:30.442 "nvme_iov_md": false 00:04:30.442 }, 00:04:30.442 "memory_domains": [ 00:04:30.442 { 00:04:30.442 "dma_device_id": "system", 00:04:30.442 "dma_device_type": 1 00:04:30.442 }, 00:04:30.442 { 00:04:30.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.442 "dma_device_type": 2 00:04:30.442 } 00:04:30.442 ], 00:04:30.442 "driver_specific": {} 00:04:30.442 } 00:04:30.442 ]' 00:04:30.442 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:30.442 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:30.442 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.442 [2024-11-15 12:26:10.727424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:30.442 [2024-11-15 12:26:10.727464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:30.442 [2024-11-15 12:26:10.727486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc31740 00:04:30.442 [2024-11-15 12:26:10.727498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:30.442 [2024-11-15 12:26:10.728863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:30.442 [2024-11-15 12:26:10.728890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:30.442 Passthru0 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.442 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.442 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.442 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:30.442 { 00:04:30.442 "name": "Malloc0", 00:04:30.442 "aliases": [ 00:04:30.442 "f348f0af-ffcb-4cc7-b1e8-85a04a033dbc" 00:04:30.442 ], 00:04:30.442 "product_name": "Malloc disk", 00:04:30.442 "block_size": 512, 00:04:30.442 "num_blocks": 16384, 00:04:30.442 "uuid": "f348f0af-ffcb-4cc7-b1e8-85a04a033dbc", 00:04:30.442 "assigned_rate_limits": { 00:04:30.442 "rw_ios_per_sec": 0, 00:04:30.442 "rw_mbytes_per_sec": 0, 00:04:30.442 "r_mbytes_per_sec": 0, 00:04:30.442 "w_mbytes_per_sec": 0 00:04:30.442 }, 00:04:30.442 "claimed": true, 00:04:30.442 "claim_type": "exclusive_write", 00:04:30.442 "zoned": false, 00:04:30.442 "supported_io_types": { 00:04:30.442 "read": true, 00:04:30.442 "write": true, 00:04:30.442 "unmap": true, 00:04:30.442 "flush": 
true, 00:04:30.442 "reset": true, 00:04:30.442 "nvme_admin": false, 00:04:30.442 "nvme_io": false, 00:04:30.442 "nvme_io_md": false, 00:04:30.442 "write_zeroes": true, 00:04:30.442 "zcopy": true, 00:04:30.442 "get_zone_info": false, 00:04:30.442 "zone_management": false, 00:04:30.442 "zone_append": false, 00:04:30.442 "compare": false, 00:04:30.442 "compare_and_write": false, 00:04:30.442 "abort": true, 00:04:30.442 "seek_hole": false, 00:04:30.442 "seek_data": false, 00:04:30.442 "copy": true, 00:04:30.442 "nvme_iov_md": false 00:04:30.442 }, 00:04:30.442 "memory_domains": [ 00:04:30.442 { 00:04:30.442 "dma_device_id": "system", 00:04:30.442 "dma_device_type": 1 00:04:30.442 }, 00:04:30.442 { 00:04:30.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.442 "dma_device_type": 2 00:04:30.442 } 00:04:30.442 ], 00:04:30.442 "driver_specific": {} 00:04:30.442 }, 00:04:30.442 { 00:04:30.442 "name": "Passthru0", 00:04:30.442 "aliases": [ 00:04:30.442 "cbf0ffdb-6dfd-5e66-8cb3-25630fedcd85" 00:04:30.442 ], 00:04:30.442 "product_name": "passthru", 00:04:30.442 "block_size": 512, 00:04:30.442 "num_blocks": 16384, 00:04:30.442 "uuid": "cbf0ffdb-6dfd-5e66-8cb3-25630fedcd85", 00:04:30.442 "assigned_rate_limits": { 00:04:30.442 "rw_ios_per_sec": 0, 00:04:30.442 "rw_mbytes_per_sec": 0, 00:04:30.442 "r_mbytes_per_sec": 0, 00:04:30.442 "w_mbytes_per_sec": 0 00:04:30.442 }, 00:04:30.442 "claimed": false, 00:04:30.442 "zoned": false, 00:04:30.442 "supported_io_types": { 00:04:30.442 "read": true, 00:04:30.442 "write": true, 00:04:30.442 "unmap": true, 00:04:30.442 "flush": true, 00:04:30.442 "reset": true, 00:04:30.442 "nvme_admin": false, 00:04:30.442 "nvme_io": false, 00:04:30.442 "nvme_io_md": false, 00:04:30.442 "write_zeroes": true, 00:04:30.442 "zcopy": true, 00:04:30.442 "get_zone_info": false, 00:04:30.442 "zone_management": false, 00:04:30.442 "zone_append": false, 00:04:30.443 "compare": false, 00:04:30.443 "compare_and_write": false, 00:04:30.443 "abort": true, 00:04:30.443 "seek_hole": false, 00:04:30.443 "seek_data": false, 00:04:30.443 "copy": true, 00:04:30.443 "nvme_iov_md": false 00:04:30.443 }, 00:04:30.443 "memory_domains": [ 00:04:30.443 { 00:04:30.443 "dma_device_id": "system", 00:04:30.443 "dma_device_type": 1 00:04:30.443 }, 00:04:30.443 { 00:04:30.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.443 "dma_device_type": 2 00:04:30.443 } 00:04:30.443 ], 00:04:30.443 "driver_specific": { 00:04:30.443 "passthru": { 00:04:30.443 "name": "Passthru0", 00:04:30.443 "base_bdev_name": "Malloc0" 00:04:30.443 } 00:04:30.443 } 00:04:30.443 } 00:04:30.443 ]' 00:04:30.443 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:30.701 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:30.701 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:30.701 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.701 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.701 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.701 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:30.701 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.701 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.701 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.701 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:30.701 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.701 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.701 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.701 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:30.701 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:30.701 12:26:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:30.701 00:04:30.701 real 0m0.223s 00:04:30.701 user 0m0.150s 00:04:30.701 sys 0m0.019s 00:04:30.701 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.701 12:26:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.701 ************************************ 00:04:30.701 END TEST rpc_integrity 00:04:30.701 ************************************ 00:04:30.701 12:26:10 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:30.701 12:26:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.701 12:26:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.701 12:26:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.701 ************************************ 00:04:30.701 START TEST rpc_plugins 00:04:30.701 ************************************ 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:30.701 12:26:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.701 12:26:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:30.701 12:26:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.701 12:26:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:30.701 { 00:04:30.701 "name": "Malloc1", 00:04:30.701 "aliases": [ 00:04:30.701 "d3a8474d-fbf0-4024-ad20-81ed6302721e" 00:04:30.701 ], 00:04:30.701 "product_name": "Malloc disk", 00:04:30.701 "block_size": 4096, 00:04:30.701 "num_blocks": 256, 00:04:30.701 "uuid": "d3a8474d-fbf0-4024-ad20-81ed6302721e", 00:04:30.701 "assigned_rate_limits": { 00:04:30.701 "rw_ios_per_sec": 0, 00:04:30.701 "rw_mbytes_per_sec": 0, 00:04:30.701 "r_mbytes_per_sec": 0, 00:04:30.701 "w_mbytes_per_sec": 0 00:04:30.701 }, 00:04:30.701 "claimed": false, 00:04:30.701 "zoned": false, 00:04:30.701 "supported_io_types": { 00:04:30.701 "read": true, 00:04:30.701 "write": true, 00:04:30.701 "unmap": true, 00:04:30.701 "flush": true, 00:04:30.701 "reset": true, 00:04:30.701 "nvme_admin": false, 00:04:30.701 "nvme_io": false, 00:04:30.701 "nvme_io_md": false, 00:04:30.701 "write_zeroes": true, 00:04:30.701 "zcopy": true, 00:04:30.701 "get_zone_info": false, 00:04:30.701 "zone_management": false, 00:04:30.701 "zone_append": false, 00:04:30.701 "compare": false, 00:04:30.701 "compare_and_write": false, 00:04:30.701 "abort": true, 00:04:30.701 "seek_hole": false, 00:04:30.701 "seek_data": false, 00:04:30.701 "copy": true, 00:04:30.701 "nvme_iov_md": false 
00:04:30.701 }, 00:04:30.701 "memory_domains": [ 00:04:30.701 { 00:04:30.701 "dma_device_id": "system", 00:04:30.701 "dma_device_type": 1 00:04:30.701 }, 00:04:30.701 { 00:04:30.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.701 "dma_device_type": 2 00:04:30.701 } 00:04:30.701 ], 00:04:30.701 "driver_specific": {} 00:04:30.701 } 00:04:30.701 ]' 00:04:30.701 12:26:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:30.701 12:26:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:30.701 12:26:10 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.701 12:26:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.701 12:26:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:30.701 12:26:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:30.701 12:26:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:30.701 00:04:30.701 real 0m0.106s 00:04:30.701 user 0m0.066s 00:04:30.701 sys 0m0.011s 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.701 12:26:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.701 ************************************ 00:04:30.701 END TEST rpc_plugins 00:04:30.701 ************************************ 00:04:30.701 12:26:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:30.701 12:26:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.701 12:26:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.701 12:26:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.701 ************************************ 00:04:30.701 START TEST rpc_trace_cmd_test 00:04:30.701 ************************************ 00:04:30.701 12:26:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:30.701 12:26:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:30.701 12:26:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:30.701 12:26:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.701 12:26:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:30.959 12:26:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.959 12:26:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:30.959 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid894270", 00:04:30.959 "tpoint_group_mask": "0x8", 00:04:30.959 "iscsi_conn": { 00:04:30.959 "mask": "0x2", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "scsi": { 00:04:30.959 "mask": "0x4", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "bdev": { 00:04:30.959 "mask": "0x8", 00:04:30.959 "tpoint_mask": "0xffffffffffffffff" 00:04:30.959 }, 00:04:30.959 "nvmf_rdma": { 00:04:30.959 "mask": "0x10", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "nvmf_tcp": { 00:04:30.959 "mask": "0x20", 00:04:30.959 
"tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "ftl": { 00:04:30.959 "mask": "0x40", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "blobfs": { 00:04:30.959 "mask": "0x80", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "dsa": { 00:04:30.959 "mask": "0x200", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "thread": { 00:04:30.959 "mask": "0x400", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "nvme_pcie": { 00:04:30.959 "mask": "0x800", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "iaa": { 00:04:30.959 "mask": "0x1000", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "nvme_tcp": { 00:04:30.959 "mask": "0x2000", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "bdev_nvme": { 00:04:30.959 "mask": "0x4000", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "sock": { 00:04:30.959 "mask": "0x8000", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "blob": { 00:04:30.959 "mask": "0x10000", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "bdev_raid": { 00:04:30.959 "mask": "0x20000", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 }, 00:04:30.959 "scheduler": { 00:04:30.959 "mask": "0x40000", 00:04:30.959 "tpoint_mask": "0x0" 00:04:30.959 } 00:04:30.959 }' 00:04:30.959 12:26:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:30.959 12:26:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:30.959 12:26:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:30.959 12:26:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:30.959 12:26:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:30.959 12:26:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:30.959 12:26:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:30.959 12:26:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:30.959 12:26:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:30.959 12:26:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:30.959 00:04:30.959 real 0m0.180s 00:04:30.959 user 0m0.161s 00:04:30.959 sys 0m0.013s 00:04:30.959 12:26:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.959 12:26:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:30.959 ************************************ 00:04:30.959 END TEST rpc_trace_cmd_test 00:04:30.959 ************************************ 00:04:30.959 12:26:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:30.959 12:26:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:30.959 12:26:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:30.959 12:26:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.959 12:26:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.959 12:26:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.959 ************************************ 00:04:30.959 START TEST rpc_daemon_integrity 00:04:30.959 ************************************ 00:04:30.960 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:30.960 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:30.960 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.960 12:26:11 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.960 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.960 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:30.960 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:31.218 { 00:04:31.218 "name": "Malloc2", 00:04:31.218 "aliases": [ 00:04:31.218 "6970d0cd-52d9-4b5d-b88c-0ae2f5ac8bdc" 00:04:31.218 ], 00:04:31.218 "product_name": "Malloc disk", 00:04:31.218 "block_size": 512, 00:04:31.218 "num_blocks": 16384, 00:04:31.218 "uuid": "6970d0cd-52d9-4b5d-b88c-0ae2f5ac8bdc", 00:04:31.218 "assigned_rate_limits": { 00:04:31.218 "rw_ios_per_sec": 0, 00:04:31.218 "rw_mbytes_per_sec": 0, 00:04:31.218 "r_mbytes_per_sec": 0, 00:04:31.218 "w_mbytes_per_sec": 0 00:04:31.218 }, 00:04:31.218 "claimed": false, 00:04:31.218 "zoned": false, 00:04:31.218 "supported_io_types": { 00:04:31.218 "read": true, 00:04:31.218 "write": true, 00:04:31.218 "unmap": true, 00:04:31.218 "flush": true, 00:04:31.218 "reset": true, 00:04:31.218 "nvme_admin": false, 00:04:31.218 "nvme_io": false, 00:04:31.218 "nvme_io_md": false, 00:04:31.218 "write_zeroes": true, 00:04:31.218 "zcopy": true, 00:04:31.218 "get_zone_info": false, 00:04:31.218 "zone_management": false, 00:04:31.218 "zone_append": false, 00:04:31.218 "compare": false, 00:04:31.218 "compare_and_write": false, 00:04:31.218 "abort": true, 00:04:31.218 "seek_hole": false, 00:04:31.218 "seek_data": false, 00:04:31.218 "copy": true, 00:04:31.218 "nvme_iov_md": false 00:04:31.218 }, 00:04:31.218 "memory_domains": [ 00:04:31.218 { 00:04:31.218 "dma_device_id": "system", 00:04:31.218 "dma_device_type": 1 00:04:31.218 }, 00:04:31.218 { 00:04:31.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.218 "dma_device_type": 2 00:04:31.218 } 00:04:31.218 ], 00:04:31.218 "driver_specific": {} 00:04:31.218 } 00:04:31.218 ]' 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.218 [2024-11-15 12:26:11.361770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:31.218 
[2024-11-15 12:26:11.361816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:31.218 [2024-11-15 12:26:11.361846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc31d20 00:04:31.218 [2024-11-15 12:26:11.361861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:31.218 [2024-11-15 12:26:11.363046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:31.218 [2024-11-15 12:26:11.363084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:31.218 Passthru0 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:31.218 { 00:04:31.218 "name": "Malloc2", 00:04:31.218 "aliases": [ 00:04:31.218 "6970d0cd-52d9-4b5d-b88c-0ae2f5ac8bdc" 00:04:31.218 ], 00:04:31.218 "product_name": "Malloc disk", 00:04:31.218 "block_size": 512, 00:04:31.218 "num_blocks": 16384, 00:04:31.218 "uuid": "6970d0cd-52d9-4b5d-b88c-0ae2f5ac8bdc", 00:04:31.218 "assigned_rate_limits": { 00:04:31.218 "rw_ios_per_sec": 0, 00:04:31.218 "rw_mbytes_per_sec": 0, 00:04:31.218 "r_mbytes_per_sec": 0, 00:04:31.218 "w_mbytes_per_sec": 0 00:04:31.218 }, 00:04:31.218 "claimed": true, 00:04:31.218 "claim_type": "exclusive_write", 00:04:31.218 "zoned": false, 00:04:31.218 "supported_io_types": { 00:04:31.218 "read": true, 00:04:31.218 "write": true, 00:04:31.218 "unmap": true, 00:04:31.218 "flush": true, 00:04:31.218 "reset": true, 00:04:31.218 "nvme_admin": false, 00:04:31.218 "nvme_io": false, 00:04:31.218 "nvme_io_md": false, 00:04:31.218 "write_zeroes": true, 00:04:31.218 "zcopy": true, 00:04:31.218 "get_zone_info": false, 00:04:31.218 "zone_management": false, 00:04:31.218 "zone_append": false, 00:04:31.218 "compare": false, 00:04:31.218 "compare_and_write": false, 00:04:31.218 "abort": true, 00:04:31.218 "seek_hole": false, 00:04:31.218 "seek_data": false, 00:04:31.218 "copy": true, 00:04:31.218 "nvme_iov_md": false 00:04:31.218 }, 00:04:31.218 "memory_domains": [ 00:04:31.218 { 00:04:31.218 "dma_device_id": "system", 00:04:31.218 "dma_device_type": 1 00:04:31.218 }, 00:04:31.218 { 00:04:31.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.218 "dma_device_type": 2 00:04:31.218 } 00:04:31.218 ], 00:04:31.218 "driver_specific": {} 00:04:31.218 }, 00:04:31.218 { 00:04:31.218 "name": "Passthru0", 00:04:31.218 "aliases": [ 00:04:31.218 "eaf7436e-2105-5790-9be4-2eff0f654a5f" 00:04:31.218 ], 00:04:31.218 "product_name": "passthru", 00:04:31.218 "block_size": 512, 00:04:31.218 "num_blocks": 16384, 00:04:31.218 "uuid": "eaf7436e-2105-5790-9be4-2eff0f654a5f", 00:04:31.218 "assigned_rate_limits": { 00:04:31.218 "rw_ios_per_sec": 0, 00:04:31.218 "rw_mbytes_per_sec": 0, 00:04:31.218 "r_mbytes_per_sec": 0, 00:04:31.218 "w_mbytes_per_sec": 0 00:04:31.218 }, 00:04:31.218 "claimed": false, 00:04:31.218 "zoned": false, 00:04:31.218 "supported_io_types": { 00:04:31.218 "read": true, 00:04:31.218 "write": true, 00:04:31.218 "unmap": true, 00:04:31.218 "flush": true, 00:04:31.218 "reset": true, 
00:04:31.218 "nvme_admin": false, 00:04:31.218 "nvme_io": false, 00:04:31.218 "nvme_io_md": false, 00:04:31.218 "write_zeroes": true, 00:04:31.218 "zcopy": true, 00:04:31.218 "get_zone_info": false, 00:04:31.218 "zone_management": false, 00:04:31.218 "zone_append": false, 00:04:31.218 "compare": false, 00:04:31.218 "compare_and_write": false, 00:04:31.218 "abort": true, 00:04:31.218 "seek_hole": false, 00:04:31.218 "seek_data": false, 00:04:31.218 "copy": true, 00:04:31.218 "nvme_iov_md": false 00:04:31.218 }, 00:04:31.218 "memory_domains": [ 00:04:31.218 { 00:04:31.218 "dma_device_id": "system", 00:04:31.218 "dma_device_type": 1 00:04:31.218 }, 00:04:31.218 { 00:04:31.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.218 "dma_device_type": 2 00:04:31.218 } 00:04:31.218 ], 00:04:31.218 "driver_specific": { 00:04:31.218 "passthru": { 00:04:31.218 "name": "Passthru0", 00:04:31.218 "base_bdev_name": "Malloc2" 00:04:31.218 } 00:04:31.218 } 00:04:31.218 } 00:04:31.218 ]' 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.218 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.219 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:31.219 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.219 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.219 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.219 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:31.219 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:31.219 12:26:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:31.219 00:04:31.219 real 0m0.213s 00:04:31.219 user 0m0.136s 00:04:31.219 sys 0m0.021s 00:04:31.219 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.219 12:26:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.219 ************************************ 00:04:31.219 END TEST rpc_daemon_integrity 00:04:31.219 ************************************ 00:04:31.219 12:26:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:31.219 12:26:11 rpc -- rpc/rpc.sh@84 -- # killprocess 894270 00:04:31.219 12:26:11 rpc -- common/autotest_common.sh@954 -- # '[' -z 894270 ']' 00:04:31.219 12:26:11 rpc -- common/autotest_common.sh@958 -- # kill -0 894270 00:04:31.219 12:26:11 rpc -- common/autotest_common.sh@959 -- # uname 00:04:31.219 12:26:11 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.219 12:26:11 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 894270 
00:04:31.219 12:26:11 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.219 12:26:11 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.219 12:26:11 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 894270' 00:04:31.219 killing process with pid 894270 00:04:31.219 12:26:11 rpc -- common/autotest_common.sh@973 -- # kill 894270 00:04:31.219 12:26:11 rpc -- common/autotest_common.sh@978 -- # wait 894270 00:04:31.784 00:04:31.784 real 0m1.942s 00:04:31.784 user 0m2.433s 00:04:31.784 sys 0m0.586s 00:04:31.784 12:26:11 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.784 12:26:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.784 ************************************ 00:04:31.784 END TEST rpc 00:04:31.784 ************************************ 00:04:31.784 12:26:11 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:31.784 12:26:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.784 12:26:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.784 12:26:11 -- common/autotest_common.sh@10 -- # set +x 00:04:31.784 ************************************ 00:04:31.784 START TEST skip_rpc 00:04:31.784 ************************************ 00:04:31.784 12:26:11 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:31.784 * Looking for test storage... 00:04:31.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:31.784 12:26:12 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.784 12:26:12 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.784 12:26:12 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.043 12:26:12 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.043 12:26:12 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:32.043 12:26:12 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.043 12:26:12 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.043 --rc genhtml_branch_coverage=1 00:04:32.043 --rc genhtml_function_coverage=1 00:04:32.043 --rc genhtml_legend=1 00:04:32.043 --rc geninfo_all_blocks=1 00:04:32.043 --rc geninfo_unexecuted_blocks=1 00:04:32.043 00:04:32.043 ' 00:04:32.043 12:26:12 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.043 --rc genhtml_branch_coverage=1 00:04:32.043 --rc genhtml_function_coverage=1 00:04:32.043 --rc genhtml_legend=1 00:04:32.043 --rc geninfo_all_blocks=1 00:04:32.043 --rc geninfo_unexecuted_blocks=1 00:04:32.043 00:04:32.043 ' 00:04:32.043 12:26:12 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.043 --rc genhtml_branch_coverage=1 00:04:32.043 --rc genhtml_function_coverage=1 00:04:32.043 --rc genhtml_legend=1 00:04:32.043 --rc geninfo_all_blocks=1 00:04:32.043 --rc geninfo_unexecuted_blocks=1 00:04:32.043 00:04:32.043 ' 00:04:32.043 12:26:12 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.043 --rc genhtml_branch_coverage=1 00:04:32.043 --rc genhtml_function_coverage=1 00:04:32.043 --rc genhtml_legend=1 00:04:32.043 --rc geninfo_all_blocks=1 00:04:32.043 --rc geninfo_unexecuted_blocks=1 00:04:32.043 00:04:32.043 ' 00:04:32.043 12:26:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:32.043 12:26:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:32.043 12:26:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:32.043 12:26:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.043 12:26:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.043 12:26:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.043 ************************************ 00:04:32.043 START TEST skip_rpc 00:04:32.043 ************************************ 00:04:32.043 12:26:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:32.043 
12:26:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=894613 00:04:32.043 12:26:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:32.043 12:26:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.043 12:26:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:32.043 [2024-11-15 12:26:12.223898] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:04:32.043 [2024-11-15 12:26:12.223974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894613 ] 00:04:32.043 [2024-11-15 12:26:12.289621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.043 [2024-11-15 12:26:12.347580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 894613 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 894613 ']' 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 894613 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 894613 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 894613' 00:04:37.301 killing process with pid 894613 00:04:37.301 12:26:17 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 894613 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 894613 00:04:37.301 00:04:37.301 real 0m5.462s 00:04:37.301 user 0m5.161s 00:04:37.301 sys 0m0.308s 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.301 12:26:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.301 ************************************ 00:04:37.301 END TEST skip_rpc 00:04:37.301 ************************************ 00:04:37.559 12:26:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:37.559 12:26:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.559 12:26:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.559 12:26:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.559 ************************************ 00:04:37.559 START TEST skip_rpc_with_json 00:04:37.559 ************************************ 00:04:37.559 12:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:37.559 12:26:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:37.559 12:26:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=895291 00:04:37.559 12:26:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.559 12:26:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.559 12:26:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 895291 00:04:37.559 12:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 895291 ']' 00:04:37.559 12:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.559 12:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.559 12:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.559 12:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.559 12:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.559 [2024-11-15 12:26:17.738428] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:04:37.559 [2024-11-15 12:26:17.738538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid895291 ] 00:04:37.559 [2024-11-15 12:26:17.802385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.559 [2024-11-15 12:26:17.859331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.817 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.817 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:37.817 12:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:37.817 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.817 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.817 [2024-11-15 12:26:18.130114] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:37.817 request: 00:04:37.817 { 00:04:37.818 "trtype": "tcp", 00:04:37.818 "method": "nvmf_get_transports", 00:04:37.818 "req_id": 1 00:04:37.818 } 00:04:37.818 Got JSON-RPC error response 00:04:37.818 response: 00:04:37.818 { 00:04:37.818 "code": -19, 00:04:37.818 "message": "No such device" 00:04:37.818 } 00:04:37.818 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:37.818 12:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:37.818 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.818 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.818 [2024-11-15 12:26:18.138213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:37.818 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.818 12:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:37.818 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.818 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.076 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.076 12:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:38.076 { 00:04:38.076 "subsystems": [ 00:04:38.076 { 00:04:38.076 "subsystem": "fsdev", 00:04:38.076 "config": [ 00:04:38.076 { 00:04:38.076 "method": "fsdev_set_opts", 00:04:38.076 "params": { 00:04:38.076 "fsdev_io_pool_size": 65535, 00:04:38.076 "fsdev_io_cache_size": 256 00:04:38.076 } 00:04:38.076 } 00:04:38.076 ] 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "vfio_user_target", 00:04:38.076 "config": null 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "keyring", 00:04:38.076 "config": [] 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "iobuf", 00:04:38.076 "config": [ 00:04:38.076 { 00:04:38.076 "method": "iobuf_set_options", 00:04:38.076 "params": { 00:04:38.076 "small_pool_count": 8192, 00:04:38.076 "large_pool_count": 1024, 00:04:38.076 "small_bufsize": 8192, 00:04:38.076 "large_bufsize": 135168, 00:04:38.076 "enable_numa": false 00:04:38.076 } 00:04:38.076 } 00:04:38.076 
] 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "sock", 00:04:38.076 "config": [ 00:04:38.076 { 00:04:38.076 "method": "sock_set_default_impl", 00:04:38.076 "params": { 00:04:38.076 "impl_name": "posix" 00:04:38.076 } 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "method": "sock_impl_set_options", 00:04:38.076 "params": { 00:04:38.076 "impl_name": "ssl", 00:04:38.076 "recv_buf_size": 4096, 00:04:38.076 "send_buf_size": 4096, 00:04:38.076 "enable_recv_pipe": true, 00:04:38.076 "enable_quickack": false, 00:04:38.076 "enable_placement_id": 0, 00:04:38.076 "enable_zerocopy_send_server": true, 00:04:38.076 "enable_zerocopy_send_client": false, 00:04:38.076 "zerocopy_threshold": 0, 00:04:38.076 "tls_version": 0, 00:04:38.076 "enable_ktls": false 00:04:38.076 } 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "method": "sock_impl_set_options", 00:04:38.076 "params": { 00:04:38.076 "impl_name": "posix", 00:04:38.076 "recv_buf_size": 2097152, 00:04:38.076 "send_buf_size": 2097152, 00:04:38.076 "enable_recv_pipe": true, 00:04:38.076 "enable_quickack": false, 00:04:38.076 "enable_placement_id": 0, 00:04:38.076 "enable_zerocopy_send_server": true, 00:04:38.076 "enable_zerocopy_send_client": false, 00:04:38.076 "zerocopy_threshold": 0, 00:04:38.076 "tls_version": 0, 00:04:38.076 "enable_ktls": false 00:04:38.076 } 00:04:38.076 } 00:04:38.076 ] 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "vmd", 00:04:38.076 "config": [] 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "accel", 00:04:38.076 "config": [ 00:04:38.076 { 00:04:38.076 "method": "accel_set_options", 00:04:38.076 "params": { 00:04:38.076 "small_cache_size": 128, 00:04:38.076 "large_cache_size": 16, 00:04:38.076 "task_count": 2048, 00:04:38.076 "sequence_count": 2048, 00:04:38.076 "buf_count": 2048 00:04:38.076 } 00:04:38.076 } 00:04:38.076 ] 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "bdev", 00:04:38.076 "config": [ 00:04:38.076 { 00:04:38.076 "method": "bdev_set_options", 00:04:38.076 "params": { 00:04:38.076 "bdev_io_pool_size": 65535, 00:04:38.076 "bdev_io_cache_size": 256, 00:04:38.076 "bdev_auto_examine": true, 00:04:38.076 "iobuf_small_cache_size": 128, 00:04:38.076 "iobuf_large_cache_size": 16 00:04:38.076 } 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "method": "bdev_raid_set_options", 00:04:38.076 "params": { 00:04:38.076 "process_window_size_kb": 1024, 00:04:38.076 "process_max_bandwidth_mb_sec": 0 00:04:38.076 } 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "method": "bdev_iscsi_set_options", 00:04:38.076 "params": { 00:04:38.076 "timeout_sec": 30 00:04:38.076 } 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "method": "bdev_nvme_set_options", 00:04:38.076 "params": { 00:04:38.076 "action_on_timeout": "none", 00:04:38.076 "timeout_us": 0, 00:04:38.076 "timeout_admin_us": 0, 00:04:38.076 "keep_alive_timeout_ms": 10000, 00:04:38.076 "arbitration_burst": 0, 00:04:38.076 "low_priority_weight": 0, 00:04:38.076 "medium_priority_weight": 0, 00:04:38.076 "high_priority_weight": 0, 00:04:38.076 "nvme_adminq_poll_period_us": 10000, 00:04:38.076 "nvme_ioq_poll_period_us": 0, 00:04:38.076 "io_queue_requests": 0, 00:04:38.076 "delay_cmd_submit": true, 00:04:38.076 "transport_retry_count": 4, 00:04:38.076 "bdev_retry_count": 3, 00:04:38.076 "transport_ack_timeout": 0, 00:04:38.076 "ctrlr_loss_timeout_sec": 0, 00:04:38.076 "reconnect_delay_sec": 0, 00:04:38.076 "fast_io_fail_timeout_sec": 0, 00:04:38.076 "disable_auto_failback": false, 00:04:38.076 "generate_uuids": false, 00:04:38.076 "transport_tos": 0, 
00:04:38.076 "nvme_error_stat": false, 00:04:38.076 "rdma_srq_size": 0, 00:04:38.076 "io_path_stat": false, 00:04:38.076 "allow_accel_sequence": false, 00:04:38.076 "rdma_max_cq_size": 0, 00:04:38.076 "rdma_cm_event_timeout_ms": 0, 00:04:38.076 "dhchap_digests": [ 00:04:38.076 "sha256", 00:04:38.076 "sha384", 00:04:38.076 "sha512" 00:04:38.076 ], 00:04:38.076 "dhchap_dhgroups": [ 00:04:38.076 "null", 00:04:38.076 "ffdhe2048", 00:04:38.076 "ffdhe3072", 00:04:38.076 "ffdhe4096", 00:04:38.076 "ffdhe6144", 00:04:38.076 "ffdhe8192" 00:04:38.076 ] 00:04:38.076 } 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "method": "bdev_nvme_set_hotplug", 00:04:38.076 "params": { 00:04:38.076 "period_us": 100000, 00:04:38.076 "enable": false 00:04:38.076 } 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "method": "bdev_wait_for_examine" 00:04:38.076 } 00:04:38.076 ] 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "scsi", 00:04:38.076 "config": null 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "scheduler", 00:04:38.076 "config": [ 00:04:38.076 { 00:04:38.076 "method": "framework_set_scheduler", 00:04:38.076 "params": { 00:04:38.076 "name": "static" 00:04:38.076 } 00:04:38.076 } 00:04:38.076 ] 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "vhost_scsi", 00:04:38.076 "config": [] 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "vhost_blk", 00:04:38.076 "config": [] 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "ublk", 00:04:38.076 "config": [] 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "nbd", 00:04:38.076 "config": [] 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "nvmf", 00:04:38.076 "config": [ 00:04:38.076 { 00:04:38.076 "method": "nvmf_set_config", 00:04:38.076 "params": { 00:04:38.076 "discovery_filter": "match_any", 00:04:38.076 "admin_cmd_passthru": { 00:04:38.076 "identify_ctrlr": false 00:04:38.076 }, 00:04:38.076 "dhchap_digests": [ 00:04:38.076 "sha256", 00:04:38.076 "sha384", 00:04:38.076 "sha512" 00:04:38.076 ], 00:04:38.076 "dhchap_dhgroups": [ 00:04:38.076 "null", 00:04:38.076 "ffdhe2048", 00:04:38.076 "ffdhe3072", 00:04:38.076 "ffdhe4096", 00:04:38.076 "ffdhe6144", 00:04:38.076 "ffdhe8192" 00:04:38.076 ] 00:04:38.076 } 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "method": "nvmf_set_max_subsystems", 00:04:38.076 "params": { 00:04:38.076 "max_subsystems": 1024 00:04:38.076 } 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "method": "nvmf_set_crdt", 00:04:38.076 "params": { 00:04:38.076 "crdt1": 0, 00:04:38.076 "crdt2": 0, 00:04:38.076 "crdt3": 0 00:04:38.076 } 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "method": "nvmf_create_transport", 00:04:38.076 "params": { 00:04:38.076 "trtype": "TCP", 00:04:38.076 "max_queue_depth": 128, 00:04:38.076 "max_io_qpairs_per_ctrlr": 127, 00:04:38.076 "in_capsule_data_size": 4096, 00:04:38.076 "max_io_size": 131072, 00:04:38.076 "io_unit_size": 131072, 00:04:38.076 "max_aq_depth": 128, 00:04:38.076 "num_shared_buffers": 511, 00:04:38.076 "buf_cache_size": 4294967295, 00:04:38.076 "dif_insert_or_strip": false, 00:04:38.076 "zcopy": false, 00:04:38.076 "c2h_success": true, 00:04:38.076 "sock_priority": 0, 00:04:38.076 "abort_timeout_sec": 1, 00:04:38.076 "ack_timeout": 0, 00:04:38.076 "data_wr_pool_size": 0 00:04:38.076 } 00:04:38.076 } 00:04:38.076 ] 00:04:38.076 }, 00:04:38.076 { 00:04:38.076 "subsystem": "iscsi", 00:04:38.076 "config": [ 00:04:38.076 { 00:04:38.076 "method": "iscsi_set_options", 00:04:38.076 "params": { 00:04:38.076 "node_base": "iqn.2016-06.io.spdk", 00:04:38.076 "max_sessions": 
128, 00:04:38.076 "max_connections_per_session": 2, 00:04:38.076 "max_queue_depth": 64, 00:04:38.077 "default_time2wait": 2, 00:04:38.077 "default_time2retain": 20, 00:04:38.077 "first_burst_length": 8192, 00:04:38.077 "immediate_data": true, 00:04:38.077 "allow_duplicated_isid": false, 00:04:38.077 "error_recovery_level": 0, 00:04:38.077 "nop_timeout": 60, 00:04:38.077 "nop_in_interval": 30, 00:04:38.077 "disable_chap": false, 00:04:38.077 "require_chap": false, 00:04:38.077 "mutual_chap": false, 00:04:38.077 "chap_group": 0, 00:04:38.077 "max_large_datain_per_connection": 64, 00:04:38.077 "max_r2t_per_connection": 4, 00:04:38.077 "pdu_pool_size": 36864, 00:04:38.077 "immediate_data_pool_size": 16384, 00:04:38.077 "data_out_pool_size": 2048 00:04:38.077 } 00:04:38.077 } 00:04:38.077 ] 00:04:38.077 } 00:04:38.077 ] 00:04:38.077 } 00:04:38.077 12:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:38.077 12:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 895291 00:04:38.077 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 895291 ']' 00:04:38.077 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 895291 00:04:38.077 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:38.077 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.077 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 895291 00:04:38.077 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.077 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.077 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 895291' 00:04:38.077 killing process with pid 895291 00:04:38.077 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 895291 00:04:38.077 12:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 895291 00:04:38.642 12:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=895431 00:04:38.642 12:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:38.642 12:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:43.902 12:26:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 895431 00:04:43.902 12:26:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 895431 ']' 00:04:43.902 12:26:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 895431 00:04:43.902 12:26:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:43.902 12:26:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.902 12:26:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 895431 00:04:43.902 12:26:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.902 12:26:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.902 12:26:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 895431' 00:04:43.902 killing process with pid 895431 00:04:43.902 12:26:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 895431 00:04:43.902 12:26:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 895431 00:04:43.902 12:26:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:43.902 12:26:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:43.902 00:04:43.902 real 0m6.536s 00:04:43.902 user 0m6.215s 00:04:43.902 sys 0m0.663s 00:04:43.902 12:26:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.902 12:26:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.902 ************************************ 00:04:43.902 END TEST skip_rpc_with_json 00:04:43.902 ************************************ 00:04:43.902 12:26:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:43.902 12:26:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.902 12:26:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.902 12:26:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.160 ************************************ 00:04:44.160 START TEST skip_rpc_with_delay 00:04:44.160 ************************************ 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:44.160 [2024-11-15 
12:26:24.324453] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:44.160 00:04:44.160 real 0m0.073s 00:04:44.160 user 0m0.043s 00:04:44.160 sys 0m0.030s 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.160 12:26:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:44.160 ************************************ 00:04:44.160 END TEST skip_rpc_with_delay 00:04:44.160 ************************************ 00:04:44.160 12:26:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:44.160 12:26:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:44.160 12:26:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:44.160 12:26:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.160 12:26:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.160 12:26:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.160 ************************************ 00:04:44.160 START TEST exit_on_failed_rpc_init 00:04:44.160 ************************************ 00:04:44.160 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:44.160 12:26:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=896149 00:04:44.160 12:26:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.160 12:26:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 896149 00:04:44.160 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 896149 ']' 00:04:44.160 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.161 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.161 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.161 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.161 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.161 [2024-11-15 12:26:24.449335] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
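The skip_rpc_with_delay case above does not need a running target at all: it passes --wait-for-rpc together with --no-rpc-server and only checks that spdk_tgt refuses the combination, which is exactly the app.c *ERROR* line in the trace. A one-line sketch of the same negative check, using the same binary path the log shows:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; echo "exit status: $?"   # expected to be non-zero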
00:04:44.161 [2024-11-15 12:26:24.449412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid896149 ] 00:04:44.418 [2024-11-15 12:26:24.515163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.418 [2024-11-15 12:26:24.574357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:44.677 12:26:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:44.677 [2024-11-15 12:26:24.893606] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:04:44.677 [2024-11-15 12:26:24.893712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid896277 ] 00:04:44.677 [2024-11-15 12:26:24.961087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.935 [2024-11-15 12:26:25.020849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.935 [2024-11-15 12:26:25.020950] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
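In the exit_on_failed_rpc_init flow traced above, the first spdk_tgt instance keeps holding the default /var/tmp/spdk.sock, so the second instance started on core mask 0x2 fails in _spdk_rpc_listen; this error and the shutdown messages that follow are the expected result of the test. Outside the test, two targets can coexist by giving each its own RPC socket with -r, the same flag the json_config run further down uses. A sketch with hypothetical socket names:

  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
  ./scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
  ./scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version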
00:04:44.935 [2024-11-15 12:26:25.020970] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:44.935 [2024-11-15 12:26:25.020982] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 896149 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 896149 ']' 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 896149 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 896149 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 896149' 00:04:44.935 killing process with pid 896149 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 896149 00:04:44.935 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 896149 00:04:45.501 00:04:45.501 real 0m1.156s 00:04:45.501 user 0m1.299s 00:04:45.501 sys 0m0.412s 00:04:45.502 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.502 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:45.502 ************************************ 00:04:45.502 END TEST exit_on_failed_rpc_init 00:04:45.502 ************************************ 00:04:45.502 12:26:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:45.502 00:04:45.502 real 0m13.579s 00:04:45.502 user 0m12.885s 00:04:45.502 sys 0m1.619s 00:04:45.502 12:26:25 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.502 12:26:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.502 ************************************ 00:04:45.502 END TEST skip_rpc 00:04:45.502 ************************************ 00:04:45.502 12:26:25 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:45.502 12:26:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.502 12:26:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.502 12:26:25 -- 
common/autotest_common.sh@10 -- # set +x 00:04:45.502 ************************************ 00:04:45.502 START TEST rpc_client 00:04:45.502 ************************************ 00:04:45.502 12:26:25 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:45.502 * Looking for test storage... 00:04:45.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:45.502 12:26:25 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.502 12:26:25 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.502 12:26:25 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.502 12:26:25 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.502 12:26:25 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:45.502 12:26:25 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.502 12:26:25 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.502 --rc genhtml_branch_coverage=1 00:04:45.502 --rc genhtml_function_coverage=1 00:04:45.502 --rc genhtml_legend=1 00:04:45.502 --rc geninfo_all_blocks=1 00:04:45.502 --rc geninfo_unexecuted_blocks=1 00:04:45.502 00:04:45.502 ' 00:04:45.502 12:26:25 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.502 --rc genhtml_branch_coverage=1 00:04:45.502 --rc genhtml_function_coverage=1 00:04:45.502 --rc genhtml_legend=1 00:04:45.502 --rc geninfo_all_blocks=1 00:04:45.502 --rc geninfo_unexecuted_blocks=1 00:04:45.502 00:04:45.502 ' 00:04:45.502 12:26:25 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.502 --rc genhtml_branch_coverage=1 00:04:45.502 --rc genhtml_function_coverage=1 00:04:45.502 --rc genhtml_legend=1 00:04:45.502 --rc geninfo_all_blocks=1 00:04:45.502 --rc geninfo_unexecuted_blocks=1 00:04:45.502 00:04:45.502 ' 00:04:45.502 12:26:25 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.502 --rc genhtml_branch_coverage=1 00:04:45.502 --rc genhtml_function_coverage=1 00:04:45.502 --rc genhtml_legend=1 00:04:45.502 --rc geninfo_all_blocks=1 00:04:45.502 --rc geninfo_unexecuted_blocks=1 00:04:45.502 00:04:45.502 ' 00:04:45.502 12:26:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:45.502 OK 00:04:45.502 12:26:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:45.502 00:04:45.502 real 0m0.151s 00:04:45.502 user 0m0.102s 00:04:45.502 sys 0m0.057s 00:04:45.502 12:26:25 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.502 12:26:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:45.502 ************************************ 00:04:45.502 END TEST rpc_client 00:04:45.502 ************************************ 00:04:45.502 12:26:25 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:45.502 12:26:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.502 12:26:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.502 12:26:25 -- common/autotest_common.sh@10 -- # set +x 00:04:45.502 ************************************ 00:04:45.502 START TEST json_config 00:04:45.502 ************************************ 00:04:45.502 12:26:25 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:45.761 12:26:25 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.761 12:26:25 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.761 12:26:25 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.761 12:26:25 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.761 12:26:25 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.761 12:26:25 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.761 12:26:25 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.761 12:26:25 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.761 12:26:25 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.761 12:26:25 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.761 12:26:25 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.761 12:26:25 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.761 12:26:25 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.761 12:26:25 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.761 12:26:25 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.761 12:26:25 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:45.761 12:26:25 json_config -- scripts/common.sh@345 -- # : 1 00:04:45.761 12:26:25 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.761 12:26:25 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.761 12:26:25 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:45.761 12:26:25 json_config -- scripts/common.sh@353 -- # local d=1 00:04:45.761 12:26:25 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.761 12:26:25 json_config -- scripts/common.sh@355 -- # echo 1 00:04:45.761 12:26:25 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.761 12:26:25 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:45.761 12:26:25 json_config -- scripts/common.sh@353 -- # local d=2 00:04:45.761 12:26:25 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.761 12:26:25 json_config -- scripts/common.sh@355 -- # echo 2 00:04:45.761 12:26:25 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.761 12:26:25 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.761 12:26:25 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.761 12:26:25 json_config -- scripts/common.sh@368 -- # return 0 00:04:45.761 12:26:25 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.761 12:26:25 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.761 --rc genhtml_branch_coverage=1 00:04:45.761 --rc genhtml_function_coverage=1 00:04:45.761 --rc genhtml_legend=1 00:04:45.761 --rc geninfo_all_blocks=1 00:04:45.761 --rc geninfo_unexecuted_blocks=1 00:04:45.761 00:04:45.761 ' 00:04:45.761 12:26:25 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.761 --rc genhtml_branch_coverage=1 00:04:45.761 --rc genhtml_function_coverage=1 00:04:45.761 --rc genhtml_legend=1 00:04:45.761 --rc geninfo_all_blocks=1 00:04:45.761 --rc geninfo_unexecuted_blocks=1 00:04:45.761 00:04:45.761 ' 00:04:45.761 12:26:25 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.761 --rc genhtml_branch_coverage=1 00:04:45.761 --rc genhtml_function_coverage=1 00:04:45.761 --rc genhtml_legend=1 00:04:45.761 --rc geninfo_all_blocks=1 00:04:45.761 --rc geninfo_unexecuted_blocks=1 00:04:45.761 00:04:45.761 ' 00:04:45.761 12:26:25 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.761 --rc genhtml_branch_coverage=1 00:04:45.761 --rc genhtml_function_coverage=1 00:04:45.761 --rc genhtml_legend=1 00:04:45.761 --rc geninfo_all_blocks=1 00:04:45.761 --rc geninfo_unexecuted_blocks=1 00:04:45.761 00:04:45.761 ' 00:04:45.761 12:26:25 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:45.761 12:26:25 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:45.761 12:26:25 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:45.761 12:26:25 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.761 12:26:25 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.761 12:26:25 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.761 12:26:25 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.761 12:26:25 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.761 12:26:25 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.761 12:26:25 json_config -- paths/export.sh@5 -- # export PATH 00:04:45.761 12:26:25 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@51 -- # : 0 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:45.761 12:26:25 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.761 12:26:25 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:45.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:45.762 12:26:25 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:45.762 12:26:25 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:45.762 12:26:25 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:45.762 INFO: JSON configuration test init 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:45.762 12:26:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.762 12:26:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:45.762 12:26:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.762 12:26:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.762 12:26:25 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:45.762 12:26:25 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:45.762 12:26:25 json_config -- json_config/common.sh@10 -- # shift 00:04:45.762 12:26:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:45.762 12:26:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:45.762 12:26:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:45.762 12:26:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.762 12:26:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.762 12:26:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=896534 00:04:45.762 12:26:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:45.762 12:26:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:45.762 Waiting for target to run... 00:04:45.762 12:26:25 json_config -- json_config/common.sh@25 -- # waitforlisten 896534 /var/tmp/spdk_tgt.sock 00:04:45.762 12:26:25 json_config -- common/autotest_common.sh@835 -- # '[' -z 896534 ']' 00:04:45.762 12:26:25 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.762 12:26:25 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.762 12:26:25 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:45.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:45.762 12:26:25 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.762 12:26:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.762 [2024-11-15 12:26:26.013178] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
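At this point json_config_test_start_app has launched the target with a private RPC socket (-r /var/tmp/spdk_tgt.sock) and --wait-for-rpc, and every tgt_rpc call in the rest of the trace is simply rpc.py aimed at that socket. The configuration replayed a few lines further down comes from gen_nvme.sh --json-with-subsystems piped into load_config. Condensed to the essential commands, with the same paths the log uses:

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # generate an NVMe bdev/subsystem config and replay it into the paused target
  ./scripts/gen_nvme.sh --json-with-subsystems | ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config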
00:04:45.762 [2024-11-15 12:26:26.013253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid896534 ] 00:04:46.019 [2024-11-15 12:26:26.349647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.277 [2024-11-15 12:26:26.393562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.841 12:26:26 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.841 12:26:26 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:46.841 12:26:26 json_config -- json_config/common.sh@26 -- # echo '' 00:04:46.841 00:04:46.841 12:26:26 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:46.841 12:26:26 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:46.841 12:26:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.841 12:26:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.841 12:26:27 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:46.841 12:26:27 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:46.841 12:26:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.841 12:26:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.841 12:26:27 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:46.841 12:26:27 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:46.841 12:26:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:50.126 12:26:30 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:50.126 12:26:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:50.126 12:26:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.126 12:26:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.126 12:26:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:50.126 12:26:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:50.126 12:26:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:50.126 12:26:30 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:50.126 12:26:30 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:50.126 12:26:30 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:50.126 12:26:30 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:50.126 12:26:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:50.385 12:26:30 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@54 -- # sort 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:50.385 12:26:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.385 12:26:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:50.385 12:26:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.385 12:26:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:50.385 12:26:30 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:50.385 12:26:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:50.643 MallocForNvmf0 00:04:50.644 12:26:30 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:50.644 12:26:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:50.902 MallocForNvmf1 00:04:50.902 12:26:31 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:50.902 12:26:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:51.160 [2024-11-15 12:26:31.325836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.160 12:26:31 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:51.160 12:26:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:51.419 12:26:31 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:51.419 12:26:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:51.676 12:26:31 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:51.677 12:26:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:51.934 12:26:32 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:51.934 12:26:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:52.192 [2024-11-15 12:26:32.401307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:52.192 12:26:32 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:52.192 12:26:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:52.192 12:26:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.192 12:26:32 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:52.192 12:26:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:52.192 12:26:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.192 12:26:32 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:52.192 12:26:32 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:52.192 12:26:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:52.449 MallocBdevForConfigChangeCheck 00:04:52.449 12:26:32 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:52.449 12:26:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:52.449 12:26:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.450 12:26:32 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:52.450 12:26:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:53.014 12:26:33 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:53.014 INFO: shutting down applications... 
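The create_nvmf_subsystem_config step traced above builds the configuration that save_config then snapshots into spdk_tgt_config.json before the shutdown that begins here: two malloc bdevs, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with both namespaces, and a listener on 127.0.0.1:4420. Issued directly with rpc.py, the same sequence looks like this (arguments copied from the trace; redirecting save_config to a file is only illustrative):

  RPC='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $RPC save_config > spdk_tgt_config.json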
00:04:53.014 12:26:33 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:53.014 12:26:33 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:53.014 12:26:33 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:53.014 12:26:33 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:54.912 Calling clear_iscsi_subsystem 00:04:54.913 Calling clear_nvmf_subsystem 00:04:54.913 Calling clear_nbd_subsystem 00:04:54.913 Calling clear_ublk_subsystem 00:04:54.913 Calling clear_vhost_blk_subsystem 00:04:54.913 Calling clear_vhost_scsi_subsystem 00:04:54.913 Calling clear_bdev_subsystem 00:04:54.913 12:26:34 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:54.913 12:26:34 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:54.913 12:26:34 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:54.913 12:26:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.913 12:26:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:54.913 12:26:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:54.913 12:26:35 json_config -- json_config/json_config.sh@352 -- # break 00:04:54.913 12:26:35 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:54.913 12:26:35 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:54.913 12:26:35 json_config -- json_config/common.sh@31 -- # local app=target 00:04:54.913 12:26:35 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:54.913 12:26:35 json_config -- json_config/common.sh@35 -- # [[ -n 896534 ]] 00:04:54.913 12:26:35 json_config -- json_config/common.sh@38 -- # kill -SIGINT 896534 00:04:54.913 12:26:35 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:54.913 12:26:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.913 12:26:35 json_config -- json_config/common.sh@41 -- # kill -0 896534 00:04:54.913 12:26:35 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.481 12:26:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.481 12:26:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.481 12:26:35 json_config -- json_config/common.sh@41 -- # kill -0 896534 00:04:55.481 12:26:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:55.481 12:26:35 json_config -- json_config/common.sh@43 -- # break 00:04:55.481 12:26:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:55.481 12:26:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:55.481 SPDK target shutdown done 00:04:55.481 12:26:35 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:55.481 INFO: relaunching applications... 
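The shutdown traced above sends SIGINT to the target and then polls the PID for up to 15 seconds before declaring it down. A minimal sketch of that loop, with the PID hard-coded to the value from this run (896534) purely for illustration:

    pid=896534          # spdk_tgt PID from this run; substitute your own
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 only tests whether the process still exists
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done

The 30 iterations of 0.5 s match the bounds visible in json_config/common.sh lines 40-45 of the trace.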
00:04:55.481 12:26:35 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.481 12:26:35 json_config -- json_config/common.sh@9 -- # local app=target 00:04:55.481 12:26:35 json_config -- json_config/common.sh@10 -- # shift 00:04:55.481 12:26:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.481 12:26:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.481 12:26:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.481 12:26:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.481 12:26:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.481 12:26:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=897743 00:04:55.481 12:26:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.481 12:26:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.481 Waiting for target to run... 00:04:55.481 12:26:35 json_config -- json_config/common.sh@25 -- # waitforlisten 897743 /var/tmp/spdk_tgt.sock 00:04:55.481 12:26:35 json_config -- common/autotest_common.sh@835 -- # '[' -z 897743 ']' 00:04:55.481 12:26:35 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.481 12:26:35 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.481 12:26:35 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.481 12:26:35 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.481 12:26:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.481 [2024-11-15 12:26:35.800305] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:04:55.481 [2024-11-15 12:26:35.800397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid897743 ] 00:04:56.049 [2024-11-15 12:26:36.324252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.049 [2024-11-15 12:26:36.375191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.331 [2024-11-15 12:26:39.430929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.331 [2024-11-15 12:26:39.463391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:59.331 12:26:39 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.331 12:26:39 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:59.331 12:26:39 json_config -- json_config/common.sh@26 -- # echo '' 00:04:59.331 00:04:59.331 12:26:39 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:59.331 12:26:39 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:59.331 INFO: Checking if target configuration is the same... 
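The "configuration is the same" check that follows normalizes the live configuration and the saved spdk_tgt_config.json with config_filter.py and diffs the results. Condensed, the comparison amounts to the sketch below; the temp-file names are mine, and piping config_filter.py through stdin/stdout is an assumption, since the xtrace lines do not show the redirections:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    filter=$rootdir/test/json_config/config_filter.py
    live=$(mktemp) && saved=$(mktemp)
    # normalize key order on both sides so the diff ignores ordering
    $rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$live"
    $filter -method sort < $rootdir/spdk_tgt_config.json > "$saved"
    diff -u "$saved" "$live" && echo 'INFO: JSON config files are the same'
    rm "$live" "$saved"

The second pass of the test deletes MallocBdevForConfigChangeCheck and repeats the same diff, expecting it to fail, which is the ret=1 path visible below.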
00:04:59.331 12:26:39 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.331 12:26:39 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:59.331 12:26:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:59.331 + '[' 2 -ne 2 ']' 00:04:59.331 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:59.331 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:59.331 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:59.331 +++ basename /dev/fd/62 00:04:59.331 ++ mktemp /tmp/62.XXX 00:04:59.331 + tmp_file_1=/tmp/62.MIR 00:04:59.331 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.331 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:59.331 + tmp_file_2=/tmp/spdk_tgt_config.json.IAa 00:04:59.331 + ret=0 00:04:59.331 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:59.589 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:59.847 + diff -u /tmp/62.MIR /tmp/spdk_tgt_config.json.IAa 00:04:59.847 + echo 'INFO: JSON config files are the same' 00:04:59.847 INFO: JSON config files are the same 00:04:59.847 + rm /tmp/62.MIR /tmp/spdk_tgt_config.json.IAa 00:04:59.847 + exit 0 00:04:59.847 12:26:39 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:59.847 12:26:39 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:59.847 INFO: changing configuration and checking if this can be detected... 00:04:59.847 12:26:39 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:59.847 12:26:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:00.106 12:26:40 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.106 12:26:40 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:00.106 12:26:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:00.106 + '[' 2 -ne 2 ']' 00:05:00.106 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:00.106 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:00.106 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:00.106 +++ basename /dev/fd/62 00:05:00.106 ++ mktemp /tmp/62.XXX 00:05:00.106 + tmp_file_1=/tmp/62.AfU 00:05:00.106 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.106 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:00.106 + tmp_file_2=/tmp/spdk_tgt_config.json.X22 00:05:00.106 + ret=0 00:05:00.106 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:00.363 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:00.363 + diff -u /tmp/62.AfU /tmp/spdk_tgt_config.json.X22 00:05:00.363 + ret=1 00:05:00.363 + echo '=== Start of file: /tmp/62.AfU ===' 00:05:00.364 + cat /tmp/62.AfU 00:05:00.622 + echo '=== End of file: /tmp/62.AfU ===' 00:05:00.622 + echo '' 00:05:00.622 + echo '=== Start of file: /tmp/spdk_tgt_config.json.X22 ===' 00:05:00.622 + cat /tmp/spdk_tgt_config.json.X22 00:05:00.622 + echo '=== End of file: /tmp/spdk_tgt_config.json.X22 ===' 00:05:00.622 + echo '' 00:05:00.622 + rm /tmp/62.AfU /tmp/spdk_tgt_config.json.X22 00:05:00.622 + exit 1 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:00.622 INFO: configuration change detected. 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@324 -- # [[ -n 897743 ]] 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.622 12:26:40 json_config -- json_config/json_config.sh@330 -- # killprocess 897743 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@954 -- # '[' -z 897743 ']' 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@958 -- # kill -0 897743 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@959 -- # uname 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.622 12:26:40 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 897743 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 897743' 00:05:00.622 killing process with pid 897743 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@973 -- # kill 897743 00:05:00.622 12:26:40 json_config -- common/autotest_common.sh@978 -- # wait 897743 00:05:02.520 12:26:42 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:02.520 12:26:42 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:02.520 12:26:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:02.520 12:26:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.520 12:26:42 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:02.520 12:26:42 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:02.520 INFO: Success 00:05:02.520 00:05:02.520 real 0m16.597s 00:05:02.520 user 0m18.233s 00:05:02.520 sys 0m2.587s 00:05:02.520 12:26:42 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.520 12:26:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.520 ************************************ 00:05:02.520 END TEST json_config 00:05:02.520 ************************************ 00:05:02.520 12:26:42 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:02.520 12:26:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.520 12:26:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.520 12:26:42 -- common/autotest_common.sh@10 -- # set +x 00:05:02.520 ************************************ 00:05:02.520 START TEST json_config_extra_key 00:05:02.520 ************************************ 00:05:02.520 12:26:42 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:02.520 12:26:42 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.520 12:26:42 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.520 12:26:42 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.520 12:26:42 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.520 12:26:42 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:02.520 12:26:42 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.520 12:26:42 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:02.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.520 --rc genhtml_branch_coverage=1 00:05:02.520 --rc genhtml_function_coverage=1 00:05:02.520 --rc genhtml_legend=1 00:05:02.520 --rc geninfo_all_blocks=1 00:05:02.520 --rc geninfo_unexecuted_blocks=1 00:05:02.520 00:05:02.520 ' 00:05:02.520 12:26:42 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.520 --rc genhtml_branch_coverage=1 00:05:02.520 --rc genhtml_function_coverage=1 00:05:02.520 --rc genhtml_legend=1 00:05:02.520 --rc geninfo_all_blocks=1 00:05:02.520 --rc geninfo_unexecuted_blocks=1 00:05:02.520 00:05:02.520 ' 00:05:02.520 12:26:42 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:02.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.520 --rc genhtml_branch_coverage=1 00:05:02.520 --rc genhtml_function_coverage=1 00:05:02.520 --rc genhtml_legend=1 00:05:02.520 --rc geninfo_all_blocks=1 00:05:02.520 --rc geninfo_unexecuted_blocks=1 00:05:02.520 00:05:02.520 ' 00:05:02.520 12:26:42 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.520 --rc genhtml_branch_coverage=1 00:05:02.520 --rc genhtml_function_coverage=1 00:05:02.520 --rc genhtml_legend=1 00:05:02.520 --rc geninfo_all_blocks=1 00:05:02.520 --rc geninfo_unexecuted_blocks=1 00:05:02.520 00:05:02.520 ' 00:05:02.520 12:26:42 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.520 12:26:42 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.520 12:26:42 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.520 12:26:42 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.521 12:26:42 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.521 12:26:42 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.521 12:26:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:02.521 12:26:42 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.521 12:26:42 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:02.521 12:26:42 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.521 12:26:42 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.521 12:26:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.521 12:26:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.521 12:26:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.521 12:26:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.521 12:26:42 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.521 12:26:42 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.521 12:26:42 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.521 12:26:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:02.521 12:26:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:02.521 12:26:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:02.521 12:26:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:02.521 12:26:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:02.521 12:26:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:02.521 12:26:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:02.521 12:26:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:02.521 12:26:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:02.521 12:26:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:02.521 12:26:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:02.521 INFO: launching applications... 
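The launch that follows starts spdk_tgt with extra_key.json and waits for its RPC socket to answer. A minimal sketch of that start-and-wait pattern, using the workspace paths from this run; polling rpc_get_methods is one way to wait and is not necessarily how the harness's waitforlisten is implemented:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $rootdir/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json $rootdir/test/json_config/extra_key.json &
    tgt_pid=$!
    echo 'Waiting for target to run...'
    # poll the RPC socket until the target answers
    until $rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.5
    done

Once the socket answers, the test immediately exercises the shutdown path, which is why the next lines mirror the SIGINT/kill -0 loop seen earlier.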
00:05:02.521 12:26:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:02.521 12:26:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:02.521 12:26:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:02.521 12:26:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:02.521 12:26:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:02.521 12:26:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:02.521 12:26:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.521 12:26:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.521 12:26:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=898684 00:05:02.521 12:26:42 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:02.521 12:26:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:02.521 Waiting for target to run... 00:05:02.521 12:26:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 898684 /var/tmp/spdk_tgt.sock 00:05:02.521 12:26:42 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 898684 ']' 00:05:02.521 12:26:42 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:02.521 12:26:42 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.521 12:26:42 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:02.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:02.521 12:26:42 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.521 12:26:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:02.521 [2024-11-15 12:26:42.667754] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:05:02.521 [2024-11-15 12:26:42.667835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898684 ] 00:05:02.779 [2024-11-15 12:26:43.002410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.779 [2024-11-15 12:26:43.043782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.344 12:26:43 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.344 12:26:43 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:03.344 12:26:43 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:03.344 00:05:03.344 12:26:43 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:03.344 INFO: shutting down applications... 
00:05:03.344 12:26:43 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:03.344 12:26:43 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:03.344 12:26:43 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:03.344 12:26:43 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 898684 ]] 00:05:03.344 12:26:43 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 898684 00:05:03.344 12:26:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:03.344 12:26:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.344 12:26:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 898684 00:05:03.345 12:26:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.910 12:26:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.910 12:26:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.910 12:26:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 898684 00:05:03.911 12:26:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:03.911 12:26:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:03.911 12:26:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:03.911 12:26:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:03.911 SPDK target shutdown done 00:05:03.911 12:26:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:03.911 Success 00:05:03.911 00:05:03.911 real 0m1.704s 00:05:03.911 user 0m1.709s 00:05:03.911 sys 0m0.454s 00:05:03.911 12:26:44 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.911 12:26:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:03.911 ************************************ 00:05:03.911 END TEST json_config_extra_key 00:05:03.911 ************************************ 00:05:03.911 12:26:44 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:03.911 12:26:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.911 12:26:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.911 12:26:44 -- common/autotest_common.sh@10 -- # set +x 00:05:03.911 ************************************ 00:05:03.911 START TEST alias_rpc 00:05:03.911 ************************************ 00:05:03.911 12:26:44 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:04.169 * Looking for test storage... 
00:05:04.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:04.169 12:26:44 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.169 12:26:44 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.169 12:26:44 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.169 12:26:44 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.169 12:26:44 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:04.169 12:26:44 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.169 12:26:44 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.169 --rc genhtml_branch_coverage=1 00:05:04.169 --rc genhtml_function_coverage=1 00:05:04.169 --rc genhtml_legend=1 00:05:04.169 --rc geninfo_all_blocks=1 00:05:04.169 --rc geninfo_unexecuted_blocks=1 00:05:04.169 00:05:04.169 ' 00:05:04.169 12:26:44 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.169 --rc genhtml_branch_coverage=1 00:05:04.169 --rc genhtml_function_coverage=1 00:05:04.169 --rc genhtml_legend=1 00:05:04.169 --rc geninfo_all_blocks=1 00:05:04.169 --rc geninfo_unexecuted_blocks=1 00:05:04.169 00:05:04.169 ' 00:05:04.169 12:26:44 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.169 --rc genhtml_branch_coverage=1 00:05:04.169 --rc genhtml_function_coverage=1 00:05:04.169 --rc genhtml_legend=1 00:05:04.169 --rc geninfo_all_blocks=1 00:05:04.169 --rc geninfo_unexecuted_blocks=1 00:05:04.169 00:05:04.169 ' 00:05:04.169 12:26:44 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.169 --rc genhtml_branch_coverage=1 00:05:04.169 --rc genhtml_function_coverage=1 00:05:04.169 --rc genhtml_legend=1 00:05:04.169 --rc geninfo_all_blocks=1 00:05:04.169 --rc geninfo_unexecuted_blocks=1 00:05:04.169 00:05:04.169 ' 00:05:04.169 12:26:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:04.169 12:26:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=898995 00:05:04.169 12:26:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.169 12:26:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 898995 00:05:04.169 12:26:44 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 898995 ']' 00:05:04.169 12:26:44 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.169 12:26:44 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.169 12:26:44 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.169 12:26:44 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.169 12:26:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.170 [2024-11-15 12:26:44.422840] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:05:04.170 [2024-11-15 12:26:44.422951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898995 ] 00:05:04.170 [2024-11-15 12:26:44.487413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.428 [2024-11-15 12:26:44.544348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.685 12:26:44 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.685 12:26:44 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:04.685 12:26:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:04.943 12:26:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 898995 00:05:04.943 12:26:45 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 898995 ']' 00:05:04.943 12:26:45 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 898995 00:05:04.943 12:26:45 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:04.943 12:26:45 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.943 12:26:45 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 898995 00:05:04.943 12:26:45 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.943 12:26:45 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.943 12:26:45 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 898995' 00:05:04.943 killing process with pid 898995 00:05:04.943 12:26:45 alias_rpc -- common/autotest_common.sh@973 -- # kill 898995 00:05:04.943 12:26:45 alias_rpc -- common/autotest_common.sh@978 -- # wait 898995 00:05:05.511 00:05:05.511 real 0m1.326s 00:05:05.511 user 0m1.448s 00:05:05.511 sys 0m0.428s 00:05:05.511 12:26:45 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.511 12:26:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.511 ************************************ 00:05:05.511 END TEST alias_rpc 00:05:05.511 ************************************ 00:05:05.511 12:26:45 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:05.511 12:26:45 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:05.511 12:26:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.511 12:26:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.511 12:26:45 -- common/autotest_common.sh@10 -- # set +x 00:05:05.511 ************************************ 00:05:05.511 START TEST spdkcli_tcp 00:05:05.511 ************************************ 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:05.511 * Looking for test storage... 
00:05:05.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.511 12:26:45 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.511 --rc genhtml_branch_coverage=1 00:05:05.511 --rc genhtml_function_coverage=1 00:05:05.511 --rc genhtml_legend=1 00:05:05.511 --rc geninfo_all_blocks=1 00:05:05.511 --rc geninfo_unexecuted_blocks=1 00:05:05.511 00:05:05.511 ' 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.511 --rc genhtml_branch_coverage=1 00:05:05.511 --rc genhtml_function_coverage=1 00:05:05.511 --rc genhtml_legend=1 00:05:05.511 --rc geninfo_all_blocks=1 00:05:05.511 --rc 
geninfo_unexecuted_blocks=1 00:05:05.511 00:05:05.511 ' 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.511 --rc genhtml_branch_coverage=1 00:05:05.511 --rc genhtml_function_coverage=1 00:05:05.511 --rc genhtml_legend=1 00:05:05.511 --rc geninfo_all_blocks=1 00:05:05.511 --rc geninfo_unexecuted_blocks=1 00:05:05.511 00:05:05.511 ' 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.511 --rc genhtml_branch_coverage=1 00:05:05.511 --rc genhtml_function_coverage=1 00:05:05.511 --rc genhtml_legend=1 00:05:05.511 --rc geninfo_all_blocks=1 00:05:05.511 --rc geninfo_unexecuted_blocks=1 00:05:05.511 00:05:05.511 ' 00:05:05.511 12:26:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:05.511 12:26:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:05.511 12:26:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:05.511 12:26:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:05.511 12:26:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:05.511 12:26:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:05.511 12:26:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.511 12:26:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=899196 00:05:05.511 12:26:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:05.511 12:26:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 899196 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 899196 ']' 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.511 12:26:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.511 [2024-11-15 12:26:45.807404] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
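Once this spdkcli_tcp target is up, the test drives rpc.py over TCP by bridging the target's UNIX socket to a TCP port with socat. A minimal sketch of that bridge, with the address, port, and rpc.py flags copied from the trace that follows rather than explained individually:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # forward TCP 127.0.0.1:9998 to the target's RPC UNIX socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # same call the test makes: enumerate RPC methods over TCP
    $rootdir/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" 2>/dev/null || true

The long bracketed list printed below is the output of that rpc_get_methods call, i.e. every RPC the target currently exposes.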
00:05:05.511 [2024-11-15 12:26:45.807485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid899196 ] 00:05:05.770 [2024-11-15 12:26:45.874225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.770 [2024-11-15 12:26:45.934846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.770 [2024-11-15 12:26:45.934851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.028 12:26:46 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.028 12:26:46 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:06.028 12:26:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=899320 00:05:06.028 12:26:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:06.028 12:26:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:06.286 [ 00:05:06.286 "bdev_malloc_delete", 00:05:06.286 "bdev_malloc_create", 00:05:06.286 "bdev_null_resize", 00:05:06.286 "bdev_null_delete", 00:05:06.286 "bdev_null_create", 00:05:06.286 "bdev_nvme_cuse_unregister", 00:05:06.286 "bdev_nvme_cuse_register", 00:05:06.286 "bdev_opal_new_user", 00:05:06.286 "bdev_opal_set_lock_state", 00:05:06.286 "bdev_opal_delete", 00:05:06.286 "bdev_opal_get_info", 00:05:06.286 "bdev_opal_create", 00:05:06.286 "bdev_nvme_opal_revert", 00:05:06.286 "bdev_nvme_opal_init", 00:05:06.286 "bdev_nvme_send_cmd", 00:05:06.286 "bdev_nvme_set_keys", 00:05:06.286 "bdev_nvme_get_path_iostat", 00:05:06.286 "bdev_nvme_get_mdns_discovery_info", 00:05:06.286 "bdev_nvme_stop_mdns_discovery", 00:05:06.286 "bdev_nvme_start_mdns_discovery", 00:05:06.286 "bdev_nvme_set_multipath_policy", 00:05:06.286 "bdev_nvme_set_preferred_path", 00:05:06.286 "bdev_nvme_get_io_paths", 00:05:06.286 "bdev_nvme_remove_error_injection", 00:05:06.286 "bdev_nvme_add_error_injection", 00:05:06.286 "bdev_nvme_get_discovery_info", 00:05:06.286 "bdev_nvme_stop_discovery", 00:05:06.286 "bdev_nvme_start_discovery", 00:05:06.286 "bdev_nvme_get_controller_health_info", 00:05:06.286 "bdev_nvme_disable_controller", 00:05:06.286 "bdev_nvme_enable_controller", 00:05:06.286 "bdev_nvme_reset_controller", 00:05:06.286 "bdev_nvme_get_transport_statistics", 00:05:06.286 "bdev_nvme_apply_firmware", 00:05:06.286 "bdev_nvme_detach_controller", 00:05:06.286 "bdev_nvme_get_controllers", 00:05:06.286 "bdev_nvme_attach_controller", 00:05:06.286 "bdev_nvme_set_hotplug", 00:05:06.286 "bdev_nvme_set_options", 00:05:06.286 "bdev_passthru_delete", 00:05:06.286 "bdev_passthru_create", 00:05:06.286 "bdev_lvol_set_parent_bdev", 00:05:06.286 "bdev_lvol_set_parent", 00:05:06.286 "bdev_lvol_check_shallow_copy", 00:05:06.286 "bdev_lvol_start_shallow_copy", 00:05:06.286 "bdev_lvol_grow_lvstore", 00:05:06.286 "bdev_lvol_get_lvols", 00:05:06.286 "bdev_lvol_get_lvstores", 00:05:06.286 "bdev_lvol_delete", 00:05:06.286 "bdev_lvol_set_read_only", 00:05:06.286 "bdev_lvol_resize", 00:05:06.286 "bdev_lvol_decouple_parent", 00:05:06.286 "bdev_lvol_inflate", 00:05:06.286 "bdev_lvol_rename", 00:05:06.286 "bdev_lvol_clone_bdev", 00:05:06.286 "bdev_lvol_clone", 00:05:06.286 "bdev_lvol_snapshot", 00:05:06.286 "bdev_lvol_create", 00:05:06.286 "bdev_lvol_delete_lvstore", 00:05:06.286 "bdev_lvol_rename_lvstore", 
00:05:06.286 "bdev_lvol_create_lvstore", 00:05:06.286 "bdev_raid_set_options", 00:05:06.286 "bdev_raid_remove_base_bdev", 00:05:06.286 "bdev_raid_add_base_bdev", 00:05:06.286 "bdev_raid_delete", 00:05:06.286 "bdev_raid_create", 00:05:06.286 "bdev_raid_get_bdevs", 00:05:06.286 "bdev_error_inject_error", 00:05:06.286 "bdev_error_delete", 00:05:06.286 "bdev_error_create", 00:05:06.286 "bdev_split_delete", 00:05:06.286 "bdev_split_create", 00:05:06.286 "bdev_delay_delete", 00:05:06.286 "bdev_delay_create", 00:05:06.286 "bdev_delay_update_latency", 00:05:06.286 "bdev_zone_block_delete", 00:05:06.286 "bdev_zone_block_create", 00:05:06.286 "blobfs_create", 00:05:06.286 "blobfs_detect", 00:05:06.286 "blobfs_set_cache_size", 00:05:06.286 "bdev_aio_delete", 00:05:06.286 "bdev_aio_rescan", 00:05:06.286 "bdev_aio_create", 00:05:06.286 "bdev_ftl_set_property", 00:05:06.286 "bdev_ftl_get_properties", 00:05:06.286 "bdev_ftl_get_stats", 00:05:06.286 "bdev_ftl_unmap", 00:05:06.286 "bdev_ftl_unload", 00:05:06.286 "bdev_ftl_delete", 00:05:06.286 "bdev_ftl_load", 00:05:06.286 "bdev_ftl_create", 00:05:06.286 "bdev_virtio_attach_controller", 00:05:06.286 "bdev_virtio_scsi_get_devices", 00:05:06.286 "bdev_virtio_detach_controller", 00:05:06.286 "bdev_virtio_blk_set_hotplug", 00:05:06.286 "bdev_iscsi_delete", 00:05:06.286 "bdev_iscsi_create", 00:05:06.286 "bdev_iscsi_set_options", 00:05:06.286 "accel_error_inject_error", 00:05:06.286 "ioat_scan_accel_module", 00:05:06.286 "dsa_scan_accel_module", 00:05:06.286 "iaa_scan_accel_module", 00:05:06.286 "vfu_virtio_create_fs_endpoint", 00:05:06.286 "vfu_virtio_create_scsi_endpoint", 00:05:06.286 "vfu_virtio_scsi_remove_target", 00:05:06.286 "vfu_virtio_scsi_add_target", 00:05:06.286 "vfu_virtio_create_blk_endpoint", 00:05:06.286 "vfu_virtio_delete_endpoint", 00:05:06.286 "keyring_file_remove_key", 00:05:06.286 "keyring_file_add_key", 00:05:06.286 "keyring_linux_set_options", 00:05:06.286 "fsdev_aio_delete", 00:05:06.286 "fsdev_aio_create", 00:05:06.286 "iscsi_get_histogram", 00:05:06.286 "iscsi_enable_histogram", 00:05:06.286 "iscsi_set_options", 00:05:06.286 "iscsi_get_auth_groups", 00:05:06.286 "iscsi_auth_group_remove_secret", 00:05:06.286 "iscsi_auth_group_add_secret", 00:05:06.286 "iscsi_delete_auth_group", 00:05:06.286 "iscsi_create_auth_group", 00:05:06.286 "iscsi_set_discovery_auth", 00:05:06.286 "iscsi_get_options", 00:05:06.286 "iscsi_target_node_request_logout", 00:05:06.286 "iscsi_target_node_set_redirect", 00:05:06.286 "iscsi_target_node_set_auth", 00:05:06.286 "iscsi_target_node_add_lun", 00:05:06.286 "iscsi_get_stats", 00:05:06.286 "iscsi_get_connections", 00:05:06.286 "iscsi_portal_group_set_auth", 00:05:06.286 "iscsi_start_portal_group", 00:05:06.286 "iscsi_delete_portal_group", 00:05:06.286 "iscsi_create_portal_group", 00:05:06.286 "iscsi_get_portal_groups", 00:05:06.286 "iscsi_delete_target_node", 00:05:06.286 "iscsi_target_node_remove_pg_ig_maps", 00:05:06.286 "iscsi_target_node_add_pg_ig_maps", 00:05:06.286 "iscsi_create_target_node", 00:05:06.286 "iscsi_get_target_nodes", 00:05:06.286 "iscsi_delete_initiator_group", 00:05:06.286 "iscsi_initiator_group_remove_initiators", 00:05:06.286 "iscsi_initiator_group_add_initiators", 00:05:06.286 "iscsi_create_initiator_group", 00:05:06.286 "iscsi_get_initiator_groups", 00:05:06.286 "nvmf_set_crdt", 00:05:06.286 "nvmf_set_config", 00:05:06.286 "nvmf_set_max_subsystems", 00:05:06.286 "nvmf_stop_mdns_prr", 00:05:06.286 "nvmf_publish_mdns_prr", 00:05:06.286 "nvmf_subsystem_get_listeners", 00:05:06.286 
"nvmf_subsystem_get_qpairs", 00:05:06.286 "nvmf_subsystem_get_controllers", 00:05:06.286 "nvmf_get_stats", 00:05:06.286 "nvmf_get_transports", 00:05:06.286 "nvmf_create_transport", 00:05:06.286 "nvmf_get_targets", 00:05:06.286 "nvmf_delete_target", 00:05:06.286 "nvmf_create_target", 00:05:06.286 "nvmf_subsystem_allow_any_host", 00:05:06.287 "nvmf_subsystem_set_keys", 00:05:06.287 "nvmf_subsystem_remove_host", 00:05:06.287 "nvmf_subsystem_add_host", 00:05:06.287 "nvmf_ns_remove_host", 00:05:06.287 "nvmf_ns_add_host", 00:05:06.287 "nvmf_subsystem_remove_ns", 00:05:06.287 "nvmf_subsystem_set_ns_ana_group", 00:05:06.287 "nvmf_subsystem_add_ns", 00:05:06.287 "nvmf_subsystem_listener_set_ana_state", 00:05:06.287 "nvmf_discovery_get_referrals", 00:05:06.287 "nvmf_discovery_remove_referral", 00:05:06.287 "nvmf_discovery_add_referral", 00:05:06.287 "nvmf_subsystem_remove_listener", 00:05:06.287 "nvmf_subsystem_add_listener", 00:05:06.287 "nvmf_delete_subsystem", 00:05:06.287 "nvmf_create_subsystem", 00:05:06.287 "nvmf_get_subsystems", 00:05:06.287 "env_dpdk_get_mem_stats", 00:05:06.287 "nbd_get_disks", 00:05:06.287 "nbd_stop_disk", 00:05:06.287 "nbd_start_disk", 00:05:06.287 "ublk_recover_disk", 00:05:06.287 "ublk_get_disks", 00:05:06.287 "ublk_stop_disk", 00:05:06.287 "ublk_start_disk", 00:05:06.287 "ublk_destroy_target", 00:05:06.287 "ublk_create_target", 00:05:06.287 "virtio_blk_create_transport", 00:05:06.287 "virtio_blk_get_transports", 00:05:06.287 "vhost_controller_set_coalescing", 00:05:06.287 "vhost_get_controllers", 00:05:06.287 "vhost_delete_controller", 00:05:06.287 "vhost_create_blk_controller", 00:05:06.287 "vhost_scsi_controller_remove_target", 00:05:06.287 "vhost_scsi_controller_add_target", 00:05:06.287 "vhost_start_scsi_controller", 00:05:06.287 "vhost_create_scsi_controller", 00:05:06.287 "thread_set_cpumask", 00:05:06.287 "scheduler_set_options", 00:05:06.287 "framework_get_governor", 00:05:06.287 "framework_get_scheduler", 00:05:06.287 "framework_set_scheduler", 00:05:06.287 "framework_get_reactors", 00:05:06.287 "thread_get_io_channels", 00:05:06.287 "thread_get_pollers", 00:05:06.287 "thread_get_stats", 00:05:06.287 "framework_monitor_context_switch", 00:05:06.287 "spdk_kill_instance", 00:05:06.287 "log_enable_timestamps", 00:05:06.287 "log_get_flags", 00:05:06.287 "log_clear_flag", 00:05:06.287 "log_set_flag", 00:05:06.287 "log_get_level", 00:05:06.287 "log_set_level", 00:05:06.287 "log_get_print_level", 00:05:06.287 "log_set_print_level", 00:05:06.287 "framework_enable_cpumask_locks", 00:05:06.287 "framework_disable_cpumask_locks", 00:05:06.287 "framework_wait_init", 00:05:06.287 "framework_start_init", 00:05:06.287 "scsi_get_devices", 00:05:06.287 "bdev_get_histogram", 00:05:06.287 "bdev_enable_histogram", 00:05:06.287 "bdev_set_qos_limit", 00:05:06.287 "bdev_set_qd_sampling_period", 00:05:06.287 "bdev_get_bdevs", 00:05:06.287 "bdev_reset_iostat", 00:05:06.287 "bdev_get_iostat", 00:05:06.287 "bdev_examine", 00:05:06.287 "bdev_wait_for_examine", 00:05:06.287 "bdev_set_options", 00:05:06.287 "accel_get_stats", 00:05:06.287 "accel_set_options", 00:05:06.287 "accel_set_driver", 00:05:06.287 "accel_crypto_key_destroy", 00:05:06.287 "accel_crypto_keys_get", 00:05:06.287 "accel_crypto_key_create", 00:05:06.287 "accel_assign_opc", 00:05:06.287 "accel_get_module_info", 00:05:06.287 "accel_get_opc_assignments", 00:05:06.287 "vmd_rescan", 00:05:06.287 "vmd_remove_device", 00:05:06.287 "vmd_enable", 00:05:06.287 "sock_get_default_impl", 00:05:06.287 "sock_set_default_impl", 
00:05:06.287 "sock_impl_set_options", 00:05:06.287 "sock_impl_get_options", 00:05:06.287 "iobuf_get_stats", 00:05:06.287 "iobuf_set_options", 00:05:06.287 "keyring_get_keys", 00:05:06.287 "vfu_tgt_set_base_path", 00:05:06.287 "framework_get_pci_devices", 00:05:06.287 "framework_get_config", 00:05:06.287 "framework_get_subsystems", 00:05:06.287 "fsdev_set_opts", 00:05:06.287 "fsdev_get_opts", 00:05:06.287 "trace_get_info", 00:05:06.287 "trace_get_tpoint_group_mask", 00:05:06.287 "trace_disable_tpoint_group", 00:05:06.287 "trace_enable_tpoint_group", 00:05:06.287 "trace_clear_tpoint_mask", 00:05:06.287 "trace_set_tpoint_mask", 00:05:06.287 "notify_get_notifications", 00:05:06.287 "notify_get_types", 00:05:06.287 "spdk_get_version", 00:05:06.287 "rpc_get_methods" 00:05:06.287 ] 00:05:06.287 12:26:46 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:06.287 12:26:46 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.287 12:26:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.287 12:26:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:06.287 12:26:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 899196 00:05:06.287 12:26:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 899196 ']' 00:05:06.287 12:26:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 899196 00:05:06.287 12:26:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:06.287 12:26:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.287 12:26:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 899196 00:05:06.287 12:26:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.287 12:26:46 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.287 12:26:46 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 899196' 00:05:06.287 killing process with pid 899196 00:05:06.287 12:26:46 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 899196 00:05:06.287 12:26:46 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 899196 00:05:06.853 00:05:06.853 real 0m1.339s 00:05:06.853 user 0m2.387s 00:05:06.853 sys 0m0.477s 00:05:06.853 12:26:46 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.853 12:26:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.853 ************************************ 00:05:06.853 END TEST spdkcli_tcp 00:05:06.853 ************************************ 00:05:06.853 12:26:46 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.853 12:26:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.853 12:26:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.853 12:26:46 -- common/autotest_common.sh@10 -- # set +x 00:05:06.853 ************************************ 00:05:06.853 START TEST dpdk_mem_utility 00:05:06.853 ************************************ 00:05:06.853 12:26:46 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.853 * Looking for test storage... 
00:05:06.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:06.853 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.853 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.853 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.853 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.853 12:26:47 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:06.854 12:26:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:06.854 12:26:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.854 12:26:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:06.854 12:26:47 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.854 12:26:47 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.854 12:26:47 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.854 12:26:47 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:06.854 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.854 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.854 --rc genhtml_branch_coverage=1 00:05:06.854 --rc genhtml_function_coverage=1 00:05:06.854 --rc genhtml_legend=1 00:05:06.854 --rc geninfo_all_blocks=1 00:05:06.854 --rc geninfo_unexecuted_blocks=1 00:05:06.854 00:05:06.854 ' 00:05:06.854 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.854 --rc 
genhtml_branch_coverage=1 00:05:06.854 --rc genhtml_function_coverage=1 00:05:06.854 --rc genhtml_legend=1 00:05:06.854 --rc geninfo_all_blocks=1 00:05:06.854 --rc geninfo_unexecuted_blocks=1 00:05:06.854 00:05:06.854 ' 00:05:06.854 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.854 --rc genhtml_branch_coverage=1 00:05:06.854 --rc genhtml_function_coverage=1 00:05:06.854 --rc genhtml_legend=1 00:05:06.854 --rc geninfo_all_blocks=1 00:05:06.854 --rc geninfo_unexecuted_blocks=1 00:05:06.854 00:05:06.854 ' 00:05:06.854 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.854 --rc genhtml_branch_coverage=1 00:05:06.854 --rc genhtml_function_coverage=1 00:05:06.854 --rc genhtml_legend=1 00:05:06.854 --rc geninfo_all_blocks=1 00:05:06.854 --rc geninfo_unexecuted_blocks=1 00:05:06.854 00:05:06.854 ' 00:05:06.854 12:26:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:06.854 12:26:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=899440 00:05:06.854 12:26:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 899440 00:05:06.854 12:26:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.854 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 899440 ']' 00:05:06.854 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.854 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.854 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.854 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.854 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:06.854 [2024-11-15 12:26:47.191083] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
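At this point spdk_tgt (pid 899440) is being launched for the dpdk_mem_utility test; the memory dump that follows is produced by asking the target for its DPDK allocator state and post-processing it with the MEM_SCRIPT helper shown above. A minimal manual reproduction, assuming the target listens on the default /var/tmp/spdk.sock and the dump lands in /tmp/spdk_mem_dump.txt as in this run, would be:

  # ask the target to dump DPDK heap/mempool/memzone state to a file
  ./scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
  # summarize the dump, then show per-element detail for heap id 0
  ./scripts/dpdk_mem_info.py
  ./scripts/dpdk_mem_info.py -m 0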
00:05:06.854 [2024-11-15 12:26:47.191172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid899440 ] 00:05:07.112 [2024-11-15 12:26:47.257399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.112 [2024-11-15 12:26:47.315545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.371 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.371 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:07.371 12:26:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:07.371 12:26:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:07.371 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.371 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.371 { 00:05:07.371 "filename": "/tmp/spdk_mem_dump.txt" 00:05:07.371 } 00:05:07.371 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.371 12:26:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:07.371 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:07.371 1 heaps totaling size 810.000000 MiB 00:05:07.371 size: 810.000000 MiB heap id: 0 00:05:07.371 end heaps---------- 00:05:07.371 9 mempools totaling size 595.772034 MiB 00:05:07.371 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:07.371 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:07.371 size: 92.545471 MiB name: bdev_io_899440 00:05:07.371 size: 50.003479 MiB name: msgpool_899440 00:05:07.371 size: 36.509338 MiB name: fsdev_io_899440 00:05:07.371 size: 21.763794 MiB name: PDU_Pool 00:05:07.371 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:07.371 size: 4.133484 MiB name: evtpool_899440 00:05:07.371 size: 0.026123 MiB name: Session_Pool 00:05:07.371 end mempools------- 00:05:07.371 6 memzones totaling size 4.142822 MiB 00:05:07.371 size: 1.000366 MiB name: RG_ring_0_899440 00:05:07.371 size: 1.000366 MiB name: RG_ring_1_899440 00:05:07.371 size: 1.000366 MiB name: RG_ring_4_899440 00:05:07.371 size: 1.000366 MiB name: RG_ring_5_899440 00:05:07.371 size: 0.125366 MiB name: RG_ring_2_899440 00:05:07.371 size: 0.015991 MiB name: RG_ring_3_899440 00:05:07.371 end memzones------- 00:05:07.371 12:26:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:07.371 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:07.371 list of free elements. 
size: 10.862488 MiB 00:05:07.371 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:07.371 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:07.371 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:07.371 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:07.371 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:07.371 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:07.371 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:07.371 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:07.371 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:07.371 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:07.371 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:07.371 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:07.371 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:07.371 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:07.371 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:07.371 list of standard malloc elements. size: 199.218628 MiB 00:05:07.371 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:07.372 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:07.372 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:07.372 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:07.372 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:07.372 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:07.372 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:07.372 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:07.372 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:07.372 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:07.372 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:07.372 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:07.372 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:07.372 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:07.372 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:07.372 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:07.372 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:07.372 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:07.372 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:07.372 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:07.372 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:07.372 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:07.372 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:07.372 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:07.372 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:07.372 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:07.372 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:07.372 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:07.372 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:07.372 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:07.372 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:07.372 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:07.372 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:07.372 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:07.372 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:07.372 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:07.372 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:07.372 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:07.372 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:07.372 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:07.372 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:07.372 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:07.372 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:07.372 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:07.372 list of memzone associated elements. size: 599.918884 MiB 00:05:07.372 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:07.372 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:07.372 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:07.372 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:07.372 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:07.372 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_899440_0 00:05:07.372 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:07.372 associated memzone info: size: 48.002930 MiB name: MP_msgpool_899440_0 00:05:07.372 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:07.372 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_899440_0 00:05:07.372 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:07.372 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:07.372 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:07.372 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:07.372 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:07.372 associated memzone info: size: 3.000122 MiB name: MP_evtpool_899440_0 00:05:07.372 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:07.372 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_899440 00:05:07.372 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:07.372 associated memzone info: size: 1.007996 MiB name: MP_evtpool_899440 00:05:07.372 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:07.372 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:07.372 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:07.372 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:07.372 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:07.372 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:07.372 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:07.372 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:07.372 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:07.372 associated memzone info: size: 1.000366 MiB name: RG_ring_0_899440 00:05:07.372 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:07.372 associated memzone info: size: 1.000366 MiB name: RG_ring_1_899440 00:05:07.372 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:07.372 associated memzone info: size: 1.000366 MiB name: RG_ring_4_899440 00:05:07.372 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:07.372 associated memzone info: size: 1.000366 MiB name: RG_ring_5_899440 00:05:07.372 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:07.372 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_899440 00:05:07.372 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:07.372 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_899440 00:05:07.372 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:07.372 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:07.372 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:07.372 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:07.372 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:07.372 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:07.372 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:07.372 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_899440 00:05:07.372 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:07.372 associated memzone info: size: 0.125366 MiB name: RG_ring_2_899440 00:05:07.372 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:07.372 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:07.372 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:07.372 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:07.372 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:07.372 associated memzone info: size: 0.015991 MiB name: RG_ring_3_899440 00:05:07.372 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:07.372 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:07.372 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:07.372 associated memzone info: size: 0.000183 MiB name: MP_msgpool_899440 00:05:07.372 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:07.372 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_899440 00:05:07.372 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:07.372 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_899440 00:05:07.372 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:07.372 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:07.372 12:26:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:07.372 12:26:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 899440 00:05:07.372 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 899440 ']' 00:05:07.372 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 899440 00:05:07.372 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:07.372 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.372 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 899440 00:05:07.630 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.630 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.630 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 899440' 00:05:07.630 killing process with pid 899440 00:05:07.630 12:26:47 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 899440 00:05:07.630 12:26:47 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 899440 00:05:07.889 00:05:07.889 real 0m1.157s 00:05:07.889 user 0m1.152s 00:05:07.889 sys 0m0.412s 00:05:07.889 12:26:48 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.889 12:26:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.889 ************************************ 00:05:07.889 END TEST dpdk_mem_utility 00:05:07.889 ************************************ 00:05:07.889 12:26:48 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:07.889 12:26:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.889 12:26:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.889 12:26:48 -- common/autotest_common.sh@10 -- # set +x 00:05:07.889 ************************************ 00:05:07.889 START TEST event 00:05:07.889 ************************************ 00:05:07.889 12:26:48 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:08.147 * Looking for test storage... 00:05:08.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:08.147 12:26:48 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.147 12:26:48 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.147 12:26:48 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.147 12:26:48 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.147 12:26:48 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.147 12:26:48 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.147 12:26:48 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.147 12:26:48 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.147 12:26:48 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.147 12:26:48 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.147 12:26:48 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.147 12:26:48 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.147 12:26:48 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.147 12:26:48 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.147 12:26:48 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.147 12:26:48 event -- scripts/common.sh@344 -- # case "$op" in 00:05:08.147 12:26:48 event -- scripts/common.sh@345 -- # : 1 00:05:08.147 12:26:48 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.147 12:26:48 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.147 12:26:48 event -- scripts/common.sh@365 -- # decimal 1 00:05:08.147 12:26:48 event -- scripts/common.sh@353 -- # local d=1 00:05:08.147 12:26:48 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.147 12:26:48 event -- scripts/common.sh@355 -- # echo 1 00:05:08.147 12:26:48 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.147 12:26:48 event -- scripts/common.sh@366 -- # decimal 2 00:05:08.147 12:26:48 event -- scripts/common.sh@353 -- # local d=2 00:05:08.147 12:26:48 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.147 12:26:48 event -- scripts/common.sh@355 -- # echo 2 00:05:08.147 12:26:48 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.147 12:26:48 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.147 12:26:48 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.147 12:26:48 event -- scripts/common.sh@368 -- # return 0 00:05:08.147 12:26:48 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.147 12:26:48 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.147 --rc genhtml_branch_coverage=1 00:05:08.147 --rc genhtml_function_coverage=1 00:05:08.147 --rc genhtml_legend=1 00:05:08.147 --rc geninfo_all_blocks=1 00:05:08.147 --rc geninfo_unexecuted_blocks=1 00:05:08.147 00:05:08.147 ' 00:05:08.147 12:26:48 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.147 --rc genhtml_branch_coverage=1 00:05:08.147 --rc genhtml_function_coverage=1 00:05:08.147 --rc genhtml_legend=1 00:05:08.147 --rc geninfo_all_blocks=1 00:05:08.147 --rc geninfo_unexecuted_blocks=1 00:05:08.147 00:05:08.147 ' 00:05:08.147 12:26:48 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.147 --rc genhtml_branch_coverage=1 00:05:08.147 --rc genhtml_function_coverage=1 00:05:08.147 --rc genhtml_legend=1 00:05:08.147 --rc geninfo_all_blocks=1 00:05:08.147 --rc geninfo_unexecuted_blocks=1 00:05:08.147 00:05:08.147 ' 00:05:08.147 12:26:48 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.147 --rc genhtml_branch_coverage=1 00:05:08.147 --rc genhtml_function_coverage=1 00:05:08.147 --rc genhtml_legend=1 00:05:08.147 --rc geninfo_all_blocks=1 00:05:08.147 --rc geninfo_unexecuted_blocks=1 00:05:08.147 00:05:08.147 ' 00:05:08.147 12:26:48 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:08.147 12:26:48 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:08.147 12:26:48 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.147 12:26:48 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:08.147 12:26:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.147 12:26:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.147 ************************************ 00:05:08.147 START TEST event_perf 00:05:08.147 ************************************ 00:05:08.147 12:26:48 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:08.147 Running I/O for 1 seconds...[2024-11-15 12:26:48.384465] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:05:08.147 [2024-11-15 12:26:48.384529] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid899723 ] 00:05:08.147 [2024-11-15 12:26:48.455227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.406 [2024-11-15 12:26:48.519875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.406 [2024-11-15 12:26:48.519938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.406 [2024-11-15 12:26:48.520002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.406 [2024-11-15 12:26:48.520005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.339 Running I/O for 1 seconds... 00:05:09.339 lcore 0: 232076 00:05:09.339 lcore 1: 232074 00:05:09.339 lcore 2: 232074 00:05:09.339 lcore 3: 232075 00:05:09.339 done. 00:05:09.339 00:05:09.339 real 0m1.215s 00:05:09.339 user 0m4.130s 00:05:09.339 sys 0m0.078s 00:05:09.339 12:26:49 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.339 12:26:49 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.339 ************************************ 00:05:09.339 END TEST event_perf 00:05:09.339 ************************************ 00:05:09.339 12:26:49 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:09.339 12:26:49 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:09.339 12:26:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.339 12:26:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.339 ************************************ 00:05:09.339 START TEST event_reactor 00:05:09.339 ************************************ 00:05:09.339 12:26:49 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:09.339 [2024-11-15 12:26:49.651783] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:05:09.339 [2024-11-15 12:26:49.651845] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid899879 ] 00:05:09.597 [2024-11-15 12:26:49.717415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.597 [2024-11-15 12:26:49.771801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.533 test_start 00:05:10.533 oneshot 00:05:10.533 tick 100 00:05:10.533 tick 100 00:05:10.533 tick 250 00:05:10.533 tick 100 00:05:10.533 tick 100 00:05:10.533 tick 100 00:05:10.533 tick 250 00:05:10.533 tick 500 00:05:10.533 tick 100 00:05:10.533 tick 100 00:05:10.533 tick 250 00:05:10.533 tick 100 00:05:10.533 tick 100 00:05:10.533 test_end 00:05:10.533 00:05:10.533 real 0m1.197s 00:05:10.533 user 0m1.128s 00:05:10.533 sys 0m0.065s 00:05:10.533 12:26:50 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.533 12:26:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:10.533 ************************************ 00:05:10.533 END TEST event_reactor 00:05:10.533 ************************************ 00:05:10.533 12:26:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.533 12:26:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:10.533 12:26:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.533 12:26:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.791 ************************************ 00:05:10.791 START TEST event_reactor_perf 00:05:10.791 ************************************ 00:05:10.791 12:26:50 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.791 [2024-11-15 12:26:50.901450] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:05:10.791 [2024-11-15 12:26:50.901514] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900039 ] 00:05:10.791 [2024-11-15 12:26:50.969767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.791 [2024-11-15 12:26:51.022922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.166 test_start 00:05:12.166 test_end 00:05:12.166 Performance: 448042 events per second 00:05:12.166 00:05:12.166 real 0m1.199s 00:05:12.166 user 0m1.140s 00:05:12.166 sys 0m0.055s 00:05:12.166 12:26:52 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.166 12:26:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:12.166 ************************************ 00:05:12.166 END TEST event_reactor_perf 00:05:12.166 ************************************ 00:05:12.166 12:26:52 event -- event/event.sh@49 -- # uname -s 00:05:12.166 12:26:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:12.166 12:26:52 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:12.166 12:26:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.166 12:26:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.166 12:26:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.166 ************************************ 00:05:12.166 START TEST event_scheduler 00:05:12.166 ************************************ 00:05:12.166 12:26:52 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:12.166 * Looking for test storage... 
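The three event-framework micro-benchmarks traced above (event_perf, reactor, reactor_perf) are standalone SPDK test apps, each run for one second; the per-lcore counters, the tick trace, and the "events per second" figure are their respective outputs. Stripped of the Jenkins workspace prefix (an assumption of this sketch, the trace uses absolute paths), the invocations are:

  # per-core event throughput on 4 reactors for 1 second
  test/event/event_perf/event_perf -m 0xF -t 1
  # single-reactor timer tick trace for 1 second
  test/event/reactor/reactor -t 1
  # single-reactor event round-trip rate for 1 second
  test/event/reactor_perf/reactor_perf -t 1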
00:05:12.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:12.166 12:26:52 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.166 12:26:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.166 12:26:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.166 12:26:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.166 12:26:52 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:12.166 12:26:52 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.166 12:26:52 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.166 --rc genhtml_branch_coverage=1 00:05:12.166 --rc genhtml_function_coverage=1 00:05:12.166 --rc genhtml_legend=1 00:05:12.166 --rc geninfo_all_blocks=1 00:05:12.166 --rc geninfo_unexecuted_blocks=1 00:05:12.166 00:05:12.166 ' 00:05:12.166 12:26:52 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.166 --rc genhtml_branch_coverage=1 00:05:12.166 --rc genhtml_function_coverage=1 00:05:12.166 --rc genhtml_legend=1 00:05:12.166 --rc geninfo_all_blocks=1 00:05:12.166 --rc geninfo_unexecuted_blocks=1 00:05:12.166 00:05:12.166 ' 00:05:12.166 12:26:52 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.166 --rc genhtml_branch_coverage=1 00:05:12.166 --rc genhtml_function_coverage=1 00:05:12.166 --rc genhtml_legend=1 00:05:12.166 --rc geninfo_all_blocks=1 00:05:12.166 --rc geninfo_unexecuted_blocks=1 00:05:12.166 00:05:12.166 ' 00:05:12.166 12:26:52 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.166 --rc genhtml_branch_coverage=1 00:05:12.166 --rc genhtml_function_coverage=1 00:05:12.166 --rc genhtml_legend=1 00:05:12.166 --rc geninfo_all_blocks=1 00:05:12.166 --rc geninfo_unexecuted_blocks=1 00:05:12.166 00:05:12.167 ' 00:05:12.167 12:26:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:12.167 12:26:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=900229 00:05:12.167 12:26:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:12.167 12:26:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.167 12:26:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 900229 
00:05:12.167 12:26:52 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 900229 ']' 00:05:12.167 12:26:52 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.167 12:26:52 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.167 12:26:52 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.167 12:26:52 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.167 12:26:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.167 [2024-11-15 12:26:52.322917] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:05:12.167 [2024-11-15 12:26:52.322998] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900229 ] 00:05:12.167 [2024-11-15 12:26:52.387078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.167 [2024-11-15 12:26:52.447853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.167 [2024-11-15 12:26:52.447905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.167 [2024-11-15 12:26:52.447972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.167 [2024-11-15 12:26:52.447975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.425 12:26:52 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.425 12:26:52 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:12.425 12:26:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:12.425 12:26:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.425 12:26:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.425 [2024-11-15 12:26:52.548834] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:12.425 [2024-11-15 12:26:52.548861] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:12.425 [2024-11-15 12:26:52.548880] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:12.425 [2024-11-15 12:26:52.548891] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:12.425 [2024-11-15 12:26:52.548902] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:12.425 12:26:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.425 12:26:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:12.425 12:26:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.425 12:26:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.425 [2024-11-15 12:26:52.647815] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
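The scheduler test app has now started with --wait-for-rpc, which lets the test select the dynamic scheduler before framework initialization completes (the dpdk_governor error above only means the governor part is skipped; the dynamic scheduler itself still loads with the limits shown). The two-step setup visible in the trace corresponds roughly to the following, with the default socket path assumed:

  # pick the dynamic scheduler while the app is still waiting for RPCs
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
  # let initialization proceed; reactors then start on cores 0-3
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init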
00:05:12.425 12:26:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.425 12:26:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:12.425 12:26:52 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.425 12:26:52 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.425 12:26:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.425 ************************************ 00:05:12.425 START TEST scheduler_create_thread 00:05:12.425 ************************************ 00:05:12.425 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:12.425 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:12.425 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.425 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.425 2 00:05:12.425 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.425 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:12.425 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.426 3 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.426 4 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.426 5 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.426 6 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.426 7 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.426 8 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.426 9 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.426 10 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.426 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.684 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.684 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:12.684 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.684 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.684 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.684 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:12.684 12:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:12.684 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.684 12:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.942 12:26:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.942 00:05:12.942 real 0m0.589s 00:05:12.942 user 0m0.012s 00:05:12.942 sys 0m0.002s 00:05:12.942 12:26:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.942 12:26:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.942 ************************************ 00:05:12.942 END TEST scheduler_create_thread 00:05:12.942 ************************************ 00:05:13.200 12:26:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:13.200 12:26:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 900229 00:05:13.200 12:26:53 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 900229 ']' 00:05:13.200 12:26:53 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 900229 00:05:13.200 12:26:53 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:13.200 12:26:53 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.200 12:26:53 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 900229 00:05:13.200 12:26:53 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:13.200 12:26:53 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:13.200 12:26:53 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 900229' 00:05:13.200 killing process with pid 900229 00:05:13.200 12:26:53 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 900229 00:05:13.200 12:26:53 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 900229 00:05:13.458 [2024-11-15 12:26:53.747892] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
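The scheduler_create_thread subtest above drives the scheduler through rpc.py's plugin mechanism: rpc_cmd is the harness wrapper around scripts/rpc.py, and scheduler_plugin (shipped with the test app) adds the scheduler_thread_* methods. Stripped of the wrapper, the calls in the trace amount to the sketch below; plugin discovery via PYTHONPATH and the target socket are assumptions here, and thread ids 11 and 12 are the ones reported in this run:

  # create an active thread pinned to core 0, then an idle pinned one
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # raise thread 11 to 50% active load, then delete thread 12
  rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  rpc.py --plugin scheduler_plugin scheduler_thread_delete 12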
00:05:13.719 00:05:13.719 real 0m1.814s 00:05:13.719 user 0m2.461s 00:05:13.719 sys 0m0.340s 00:05:13.719 12:26:53 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.719 12:26:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.719 ************************************ 00:05:13.719 END TEST event_scheduler 00:05:13.719 ************************************ 00:05:13.719 12:26:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:13.719 12:26:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:13.719 12:26:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.719 12:26:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.719 12:26:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.719 ************************************ 00:05:13.719 START TEST app_repeat 00:05:13.719 ************************************ 00:05:13.719 12:26:54 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:13.719 12:26:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.719 12:26:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.719 12:26:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:13.719 12:26:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.719 12:26:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:13.719 12:26:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:13.719 12:26:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:13.719 12:26:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=900541 00:05:13.719 12:26:54 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:13.719 12:26:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.719 12:26:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 900541' 00:05:13.719 Process app_repeat pid: 900541 00:05:13.719 12:26:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.719 12:26:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:13.719 spdk_app_start Round 0 00:05:13.719 12:26:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 900541 /var/tmp/spdk-nbd.sock 00:05:13.719 12:26:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 900541 ']' 00:05:13.719 12:26:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.719 12:26:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.719 12:26:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.719 12:26:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.719 12:26:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.719 [2024-11-15 12:26:54.037262] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:05:13.719 [2024-11-15 12:26:54.037327] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900541 ] 00:05:13.978 [2024-11-15 12:26:54.103028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.978 [2024-11-15 12:26:54.158121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.978 [2024-11-15 12:26:54.158124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.978 12:26:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.978 12:26:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:13.978 12:26:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.236 Malloc0 00:05:14.236 12:26:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.803 Malloc1 00:05:14.803 12:26:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.803 12:26:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.061 /dev/nbd0 00:05:15.061 12:26:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.061 12:26:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.061 12:26:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.061 12:26:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.061 12:26:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.061 12:26:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.061 12:26:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:15.061 12:26:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.061 12:26:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.061 12:26:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.061 12:26:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.061 1+0 records in 00:05:15.061 1+0 records out 00:05:15.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195587 s, 20.9 MB/s 00:05:15.061 12:26:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.061 12:26:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.061 12:26:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.061 12:26:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.061 12:26:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.061 12:26:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.061 12:26:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.061 12:26:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.320 /dev/nbd1 00:05:15.320 12:26:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.320 12:26:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.320 12:26:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:15.320 12:26:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.320 12:26:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.320 12:26:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.320 12:26:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:15.320 12:26:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.320 12:26:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.320 12:26:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.320 12:26:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.320 1+0 records in 00:05:15.320 1+0 records out 00:05:15.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181847 s, 22.5 MB/s 00:05:15.320 12:26:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.320 12:26:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.320 12:26:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.320 12:26:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.320 12:26:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.320 12:26:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.320 12:26:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.320 
12:26:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.320 12:26:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.320 12:26:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.578 { 00:05:15.578 "nbd_device": "/dev/nbd0", 00:05:15.578 "bdev_name": "Malloc0" 00:05:15.578 }, 00:05:15.578 { 00:05:15.578 "nbd_device": "/dev/nbd1", 00:05:15.578 "bdev_name": "Malloc1" 00:05:15.578 } 00:05:15.578 ]' 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.578 { 00:05:15.578 "nbd_device": "/dev/nbd0", 00:05:15.578 "bdev_name": "Malloc0" 00:05:15.578 }, 00:05:15.578 { 00:05:15.578 "nbd_device": "/dev/nbd1", 00:05:15.578 "bdev_name": "Malloc1" 00:05:15.578 } 00:05:15.578 ]' 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.578 /dev/nbd1' 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.578 /dev/nbd1' 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.578 256+0 records in 00:05:15.578 256+0 records out 00:05:15.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514584 s, 204 MB/s 00:05:15.578 12:26:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.579 256+0 records in 00:05:15.579 256+0 records out 00:05:15.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198137 s, 52.9 MB/s 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.579 256+0 records in 00:05:15.579 256+0 records out 00:05:15.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022208 s, 47.2 MB/s 00:05:15.579 12:26:55 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.579 12:26:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:16.145 12:26:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.403 12:26:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.403 12:26:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.403 12:26:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.403 12:26:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.403 12:26:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.660 12:26:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.660 12:26:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.660 12:26:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.660 12:26:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.660 12:26:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.660 12:26:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.660 12:26:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.660 12:26:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.660 12:26:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.660 12:26:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.660 12:26:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.660 12:26:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.660 12:26:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.917 12:26:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.176 [2024-11-15 12:26:57.289187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.176 [2024-11-15 12:26:57.343943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.176 [2024-11-15 12:26:57.343944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.176 [2024-11-15 12:26:57.401989] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.176 [2024-11-15 12:26:57.402072] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.457 12:27:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.457 12:27:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:20.457 spdk_app_start Round 1 00:05:20.457 12:27:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 900541 /var/tmp/spdk-nbd.sock 00:05:20.457 12:27:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 900541 ']' 00:05:20.457 12:27:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.457 12:27:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.457 12:27:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:20.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:20.457 12:27:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.457 12:27:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.457 12:27:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.457 12:27:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:20.457 12:27:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.457 Malloc0 00:05:20.457 12:27:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.714 Malloc1 00:05:20.714 12:27:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.714 12:27:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.714 12:27:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.714 12:27:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:20.714 12:27:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.714 12:27:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:20.714 12:27:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.714 12:27:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.715 12:27:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.715 12:27:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:20.715 12:27:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.715 12:27:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:20.715 12:27:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:20.715 12:27:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:20.715 12:27:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.715 12:27:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.972 /dev/nbd0 00:05:20.972 12:27:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.972 12:27:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.972 12:27:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:20.972 12:27:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.972 12:27:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.972 12:27:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.972 12:27:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:20.972 12:27:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.972 12:27:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.972 12:27:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.972 12:27:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:21.263 1+0 records in 00:05:21.263 1+0 records out 00:05:21.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155825 s, 26.3 MB/s 00:05:21.263 12:27:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.263 12:27:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.263 12:27:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.263 12:27:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.263 12:27:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.263 12:27:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.263 12:27:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.263 12:27:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.572 /dev/nbd1 00:05:21.572 12:27:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.572 12:27:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.572 12:27:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:21.572 12:27:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.572 12:27:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.572 12:27:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.572 12:27:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:21.572 12:27:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.572 12:27:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.572 12:27:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.572 12:27:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.572 1+0 records in 00:05:21.572 1+0 records out 00:05:21.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203212 s, 20.2 MB/s 00:05:21.572 12:27:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.572 12:27:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.572 12:27:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.572 12:27:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.572 12:27:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.572 12:27:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.572 12:27:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.572 12:27:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.572 12:27:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.572 12:27:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:21.852 { 00:05:21.852 "nbd_device": "/dev/nbd0", 00:05:21.852 "bdev_name": "Malloc0" 00:05:21.852 }, 00:05:21.852 { 00:05:21.852 "nbd_device": "/dev/nbd1", 00:05:21.852 "bdev_name": "Malloc1" 00:05:21.852 } 00:05:21.852 ]' 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:21.852 { 00:05:21.852 "nbd_device": "/dev/nbd0", 00:05:21.852 "bdev_name": "Malloc0" 00:05:21.852 }, 00:05:21.852 { 00:05:21.852 "nbd_device": "/dev/nbd1", 00:05:21.852 "bdev_name": "Malloc1" 00:05:21.852 } 00:05:21.852 ]' 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:21.852 /dev/nbd1' 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:21.852 /dev/nbd1' 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:21.852 256+0 records in 00:05:21.852 256+0 records out 00:05:21.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00510075 s, 206 MB/s 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.852 12:27:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:21.852 256+0 records in 00:05:21.852 256+0 records out 00:05:21.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201684 s, 52.0 MB/s 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:21.852 256+0 records in 00:05:21.852 256+0 records out 00:05:21.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022338 s, 46.9 MB/s 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.852 12:27:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.110 12:27:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.110 12:27:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.110 12:27:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.110 12:27:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.110 12:27:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.110 12:27:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.110 12:27:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.110 12:27:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.110 12:27:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.110 12:27:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.368 12:27:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.368 12:27:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.368 12:27:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.368 12:27:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.368 12:27:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.368 12:27:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.368 12:27:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.368 12:27:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.368 12:27:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.368 12:27:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.368 12:27:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.626 12:27:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.626 12:27:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.626 12:27:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.883 12:27:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.883 12:27:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.883 12:27:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.883 12:27:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.883 12:27:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.883 12:27:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.883 12:27:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.883 12:27:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.883 12:27:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.883 12:27:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.142 12:27:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.400 [2024-11-15 12:27:03.503477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.400 [2024-11-15 12:27:03.556909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.400 [2024-11-15 12:27:03.556909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.400 [2024-11-15 12:27:03.616301] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.400 [2024-11-15 12:27:03.616370] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.686 12:27:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.686 12:27:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:26.686 spdk_app_start Round 2 00:05:26.686 12:27:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 900541 /var/tmp/spdk-nbd.sock 00:05:26.686 12:27:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 900541 ']' 00:05:26.686 12:27:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.686 12:27:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.686 12:27:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:26.686 12:27:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.686 12:27:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.686 12:27:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.686 12:27:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:26.686 12:27:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.686 Malloc0 00:05:26.686 12:27:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.944 Malloc1 00:05:26.944 12:27:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.944 12:27:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.202 /dev/nbd0 00:05:27.202 12:27:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.202 12:27:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.202 12:27:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:27.202 12:27:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.202 12:27:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.202 12:27:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.202 12:27:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:27.202 12:27:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.202 12:27:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.202 12:27:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.202 12:27:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:27.202 1+0 records in 00:05:27.202 1+0 records out 00:05:27.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208541 s, 19.6 MB/s 00:05:27.202 12:27:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.202 12:27:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.202 12:27:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.202 12:27:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.202 12:27:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.202 12:27:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.202 12:27:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.202 12:27:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.769 /dev/nbd1 00:05:27.769 12:27:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.769 12:27:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.769 12:27:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:27.769 12:27:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.769 12:27:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.769 12:27:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.769 12:27:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:27.769 12:27:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.769 12:27:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.769 12:27:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.769 12:27:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.769 1+0 records in 00:05:27.769 1+0 records out 00:05:27.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229157 s, 17.9 MB/s 00:05:27.769 12:27:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.769 12:27:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.769 12:27:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.769 12:27:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.769 12:27:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.769 12:27:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.769 12:27:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.769 12:27:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.769 12:27:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.769 12:27:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:28.028 { 00:05:28.028 "nbd_device": "/dev/nbd0", 00:05:28.028 "bdev_name": "Malloc0" 00:05:28.028 }, 00:05:28.028 { 00:05:28.028 "nbd_device": "/dev/nbd1", 00:05:28.028 "bdev_name": "Malloc1" 00:05:28.028 } 00:05:28.028 ]' 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.028 { 00:05:28.028 "nbd_device": "/dev/nbd0", 00:05:28.028 "bdev_name": "Malloc0" 00:05:28.028 }, 00:05:28.028 { 00:05:28.028 "nbd_device": "/dev/nbd1", 00:05:28.028 "bdev_name": "Malloc1" 00:05:28.028 } 00:05:28.028 ]' 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.028 /dev/nbd1' 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.028 /dev/nbd1' 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.028 256+0 records in 00:05:28.028 256+0 records out 00:05:28.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00403944 s, 260 MB/s 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.028 256+0 records in 00:05:28.028 256+0 records out 00:05:28.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217546 s, 48.2 MB/s 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.028 256+0 records in 00:05:28.028 256+0 records out 00:05:28.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022905 s, 45.8 MB/s 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.028 12:27:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.287 12:27:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.287 12:27:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.287 12:27:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.287 12:27:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.287 12:27:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.287 12:27:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.287 12:27:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.287 12:27:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.287 12:27:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.287 12:27:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.544 12:27:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.544 12:27:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.544 12:27:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.544 12:27:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.544 12:27:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.544 12:27:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.544 12:27:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.544 12:27:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.544 12:27:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.544 12:27:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.544 12:27:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.110 12:27:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.110 12:27:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.110 12:27:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.110 12:27:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.110 12:27:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.110 12:27:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.110 12:27:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.110 12:27:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.110 12:27:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.110 12:27:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.110 12:27:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.110 12:27:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.110 12:27:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.369 12:27:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:29.369 [2024-11-15 12:27:09.685510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.628 [2024-11-15 12:27:09.741158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.628 [2024-11-15 12:27:09.741161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.628 [2024-11-15 12:27:09.799540] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.628 [2024-11-15 12:27:09.799611] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.154 12:27:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 900541 /var/tmp/spdk-nbd.sock 00:05:32.154 12:27:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 900541 ']' 00:05:32.154 12:27:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.154 12:27:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.154 12:27:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:32.154 12:27:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.154 12:27:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.412 12:27:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.412 12:27:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:32.412 12:27:12 event.app_repeat -- event/event.sh@39 -- # killprocess 900541 00:05:32.412 12:27:12 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 900541 ']' 00:05:32.412 12:27:12 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 900541 00:05:32.412 12:27:12 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:32.670 12:27:12 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.670 12:27:12 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 900541 00:05:32.670 12:27:12 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.670 12:27:12 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.670 12:27:12 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 900541' 00:05:32.670 killing process with pid 900541 00:05:32.670 12:27:12 event.app_repeat -- common/autotest_common.sh@973 -- # kill 900541 00:05:32.671 12:27:12 event.app_repeat -- common/autotest_common.sh@978 -- # wait 900541 00:05:32.671 spdk_app_start is called in Round 0. 00:05:32.671 Shutdown signal received, stop current app iteration 00:05:32.671 Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 reinitialization... 00:05:32.671 spdk_app_start is called in Round 1. 00:05:32.671 Shutdown signal received, stop current app iteration 00:05:32.671 Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 reinitialization... 00:05:32.671 spdk_app_start is called in Round 2. 00:05:32.671 Shutdown signal received, stop current app iteration 00:05:32.671 Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 reinitialization... 00:05:32.671 spdk_app_start is called in Round 3. 
00:05:32.671 Shutdown signal received, stop current app iteration 00:05:32.671 12:27:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:32.671 12:27:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:32.671 00:05:32.671 real 0m18.973s 00:05:32.671 user 0m42.165s 00:05:32.671 sys 0m3.221s 00:05:32.671 12:27:12 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.671 12:27:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.671 ************************************ 00:05:32.671 END TEST app_repeat 00:05:32.671 ************************************ 00:05:32.671 12:27:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:32.671 12:27:13 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:32.671 12:27:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.671 12:27:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.671 12:27:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.929 ************************************ 00:05:32.929 START TEST cpu_locks 00:05:32.929 ************************************ 00:05:32.929 12:27:13 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:32.929 * Looking for test storage... 00:05:32.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:32.929 12:27:13 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.929 12:27:13 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.929 12:27:13 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.929 12:27:13 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.929 12:27:13 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:32.929 12:27:13 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.929 12:27:13 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.929 --rc genhtml_branch_coverage=1 00:05:32.929 --rc genhtml_function_coverage=1 00:05:32.929 --rc genhtml_legend=1 00:05:32.929 --rc geninfo_all_blocks=1 00:05:32.929 --rc geninfo_unexecuted_blocks=1 00:05:32.929 00:05:32.929 ' 00:05:32.929 12:27:13 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.929 --rc genhtml_branch_coverage=1 00:05:32.929 --rc genhtml_function_coverage=1 00:05:32.929 --rc genhtml_legend=1 00:05:32.929 --rc geninfo_all_blocks=1 00:05:32.929 --rc geninfo_unexecuted_blocks=1 00:05:32.929 00:05:32.929 ' 00:05:32.929 12:27:13 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:32.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.929 --rc genhtml_branch_coverage=1 00:05:32.929 --rc genhtml_function_coverage=1 00:05:32.929 --rc genhtml_legend=1 00:05:32.929 --rc geninfo_all_blocks=1 00:05:32.930 --rc geninfo_unexecuted_blocks=1 00:05:32.930 00:05:32.930 ' 00:05:32.930 12:27:13 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.930 --rc genhtml_branch_coverage=1 00:05:32.930 --rc genhtml_function_coverage=1 00:05:32.930 --rc genhtml_legend=1 00:05:32.930 --rc geninfo_all_blocks=1 00:05:32.930 --rc geninfo_unexecuted_blocks=1 00:05:32.930 00:05:32.930 ' 00:05:32.930 12:27:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:32.930 12:27:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:32.930 12:27:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:32.930 12:27:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:32.930 12:27:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.930 12:27:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.930 12:27:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.930 ************************************ 
00:05:32.930 START TEST default_locks 00:05:32.930 ************************************ 00:05:32.930 12:27:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:32.930 12:27:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=903657 00:05:32.930 12:27:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.930 12:27:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 903657 00:05:32.930 12:27:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 903657 ']' 00:05:32.930 12:27:13 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.930 12:27:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.930 12:27:13 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.930 12:27:13 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.930 12:27:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.930 [2024-11-15 12:27:13.257385] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:05:32.930 [2024-11-15 12:27:13.257484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903657 ] 00:05:33.188 [2024-11-15 12:27:13.322643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.188 [2024-11-15 12:27:13.377926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.446 12:27:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.446 12:27:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:33.446 12:27:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 903657 00:05:33.446 12:27:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 903657 00:05:33.446 12:27:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.704 lslocks: write error 00:05:33.704 12:27:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 903657 00:05:33.704 12:27:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 903657 ']' 00:05:33.704 12:27:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 903657 00:05:33.704 12:27:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:33.704 12:27:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.704 12:27:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 903657 00:05:33.704 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.704 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.704 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 903657' 
00:05:33.704 killing process with pid 903657 00:05:33.704 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 903657 00:05:33.704 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 903657 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 903657 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 903657 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 903657 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 903657 ']' 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (903657) - No such process 00:05:34.271 ERROR: process (pid: 903657) is no longer running 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:34.271 00:05:34.271 real 0m1.215s 00:05:34.271 user 0m1.178s 00:05:34.271 sys 0m0.527s 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.271 12:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.271 ************************************ 00:05:34.271 END TEST default_locks 00:05:34.271 ************************************ 00:05:34.271 12:27:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:34.271 12:27:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.271 12:27:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.271 12:27:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.271 ************************************ 00:05:34.271 START TEST default_locks_via_rpc 00:05:34.271 ************************************ 00:05:34.271 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:34.271 12:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=903819 00:05:34.271 12:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.271 12:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 903819 00:05:34.271 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 903819 ']' 00:05:34.271 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.271 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.271 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:34.271 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.271 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.271 [2024-11-15 12:27:14.530170] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:05:34.271 [2024-11-15 12:27:14.530254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903819 ] 00:05:34.271 [2024-11-15 12:27:14.594744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.530 [2024-11-15 12:27:14.656261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 903819 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 903819 00:05:34.788 12:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.045 12:27:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 903819 00:05:35.045 12:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 903819 ']' 00:05:35.045 12:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 903819 00:05:35.045 12:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:35.045 12:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.045 12:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 903819 00:05:35.045 12:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.045 12:27:15 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.045 12:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 903819' 00:05:35.045 killing process with pid 903819 00:05:35.045 12:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 903819 00:05:35.045 12:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 903819 00:05:35.303 00:05:35.303 real 0m1.093s 00:05:35.303 user 0m1.089s 00:05:35.303 sys 0m0.469s 00:05:35.303 12:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.303 12:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.303 ************************************ 00:05:35.303 END TEST default_locks_via_rpc 00:05:35.303 ************************************ 00:05:35.303 12:27:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:35.303 12:27:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.303 12:27:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.303 12:27:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.303 ************************************ 00:05:35.303 START TEST non_locking_app_on_locked_coremask 00:05:35.303 ************************************ 00:05:35.303 12:27:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:35.303 12:27:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=903981 00:05:35.303 12:27:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.303 12:27:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 903981 /var/tmp/spdk.sock 00:05:35.303 12:27:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 903981 ']' 00:05:35.303 12:27:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.303 12:27:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.303 12:27:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.303 12:27:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.303 12:27:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.561 [2024-11-15 12:27:15.674780] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:05:35.561 [2024-11-15 12:27:15.674863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903981 ] 00:05:35.561 [2024-11-15 12:27:15.739655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.561 [2024-11-15 12:27:15.800087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.819 12:27:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.819 12:27:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.819 12:27:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=903989 00:05:35.819 12:27:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:35.819 12:27:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 903989 /var/tmp/spdk2.sock 00:05:35.819 12:27:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 903989 ']' 00:05:35.819 12:27:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.819 12:27:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.819 12:27:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.819 12:27:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.819 12:27:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.819 [2024-11-15 12:27:16.110279] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:05:35.819 [2024-11-15 12:27:16.110350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903989 ] 00:05:36.078 [2024-11-15 12:27:16.206778] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
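For reference, the non_locking_app_on_locked_coremask scenario traced here boils down to: one spdk_tgt claims core 0, then a second spdk_tgt is started on the same core with --disable-cpumask-locks so it skips the claim and both can run. The lines below are a rough stand-alone replay of that sequence, not the harness script itself: the spdk_tgt path is a placeholder, and the sleep calls are crude stand-ins for the waitforlisten/killprocess helpers seen in the trace.

    SPDK_TGT=./build/bin/spdk_tgt      # placeholder path; adjust to your checkout
    "$SPDK_TGT" -m 0x1 &               # first target claims core 0 (lock file /var/tmp/spdk_cpu_lock_000)
    pid1=$!
    sleep 2                            # crude stand-in for waitforlisten
    lslocks -p "$pid1" | grep spdk_cpu_lock    # shows the lock the target holds on its core-0 file
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, claiming skipped
    pid2=$!
    sleep 2
    kill "$pid1" "$pid2"; wait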
00:05:36.078 [2024-11-15 12:27:16.206805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.078 [2024-11-15 12:27:16.317210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.011 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.011 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:37.011 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 903981 00:05:37.011 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.011 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 903981 00:05:37.269 lslocks: write error 00:05:37.269 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 903981 00:05:37.269 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 903981 ']' 00:05:37.269 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 903981 00:05:37.269 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:37.269 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.269 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 903981 00:05:37.269 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.269 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.269 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 903981' 00:05:37.269 killing process with pid 903981 00:05:37.269 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 903981 00:05:37.269 12:27:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 903981 00:05:38.203 12:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 903989 00:05:38.203 12:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 903989 ']' 00:05:38.203 12:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 903989 00:05:38.203 12:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:38.203 12:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.203 12:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 903989 00:05:38.203 12:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.203 12:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.203 12:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 903989' 00:05:38.203 killing 
process with pid 903989 00:05:38.203 12:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 903989 00:05:38.203 12:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 903989 00:05:38.462 00:05:38.462 real 0m3.150s 00:05:38.462 user 0m3.381s 00:05:38.462 sys 0m0.989s 00:05:38.462 12:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.462 12:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.462 ************************************ 00:05:38.462 END TEST non_locking_app_on_locked_coremask 00:05:38.462 ************************************ 00:05:38.462 12:27:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:38.462 12:27:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.462 12:27:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.462 12:27:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.720 ************************************ 00:05:38.720 START TEST locking_app_on_unlocked_coremask 00:05:38.720 ************************************ 00:05:38.720 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:38.720 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=904415 00:05:38.720 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:38.720 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 904415 /var/tmp/spdk.sock 00:05:38.720 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 904415 ']' 00:05:38.720 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.720 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.720 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.720 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.720 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.720 [2024-11-15 12:27:18.874930] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:05:38.720 [2024-11-15 12:27:18.875009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904415 ] 00:05:38.720 [2024-11-15 12:27:18.939480] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
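The locks_exist and no_locks helpers that keep showing up in these traces are small enough to restate. The sketch below paraphrases what the xtrace shows (lslocks plus a glob over /var/tmp/spdk_cpu_lock_*); the real cpu_locks.sh helpers differ in minor details such as glob handling.

    locks_exist() {    # true if pid $1 holds a lock on one of the spdk_cpu_lock files
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    no_locks() {       # succeed only if no /var/tmp/spdk_cpu_lock_* files are left behind
        local f found=0
        for f in /var/tmp/spdk_cpu_lock_*; do
            [[ -e $f ]] && found=1     # guards against the literal, unmatched glob
        done
        (( found == 0 ))
    }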
00:05:38.720 [2024-11-15 12:27:18.939520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.720 [2024-11-15 12:27:19.000100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.979 12:27:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.979 12:27:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:38.979 12:27:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=904429 00:05:38.979 12:27:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.979 12:27:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 904429 /var/tmp/spdk2.sock 00:05:38.979 12:27:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 904429 ']' 00:05:38.979 12:27:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.979 12:27:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.979 12:27:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.979 12:27:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.979 12:27:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.979 [2024-11-15 12:27:19.310418] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:05:38.979 [2024-11-15 12:27:19.310490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904429 ] 00:05:39.237 [2024-11-15 12:27:19.407095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.237 [2024-11-15 12:27:19.520869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.172 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.172 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:40.172 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 904429 00:05:40.172 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 904429 00:05:40.172 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.430 lslocks: write error 00:05:40.430 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 904415 00:05:40.430 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 904415 ']' 00:05:40.430 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 904415 00:05:40.430 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:40.430 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.430 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 904415 00:05:40.430 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.430 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.430 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 904415' 00:05:40.430 killing process with pid 904415 00:05:40.430 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 904415 00:05:40.430 12:27:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 904415 00:05:41.364 12:27:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 904429 00:05:41.364 12:27:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 904429 ']' 00:05:41.364 12:27:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 904429 00:05:41.364 12:27:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:41.364 12:27:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.364 12:27:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 904429 00:05:41.364 12:27:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.364 12:27:21 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.364 12:27:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 904429' 00:05:41.364 killing process with pid 904429 00:05:41.364 12:27:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 904429 00:05:41.364 12:27:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 904429 00:05:41.931 00:05:41.931 real 0m3.182s 00:05:41.931 user 0m3.414s 00:05:41.931 sys 0m0.997s 00:05:41.931 12:27:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.931 12:27:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.931 ************************************ 00:05:41.931 END TEST locking_app_on_unlocked_coremask 00:05:41.931 ************************************ 00:05:41.931 12:27:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:41.931 12:27:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.931 12:27:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.931 12:27:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.931 ************************************ 00:05:41.931 START TEST locking_app_on_locked_coremask 00:05:41.931 ************************************ 00:05:41.931 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:41.931 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=904763 00:05:41.931 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.931 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 904763 /var/tmp/spdk.sock 00:05:41.931 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 904763 ']' 00:05:41.931 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.931 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.931 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.931 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.931 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.931 [2024-11-15 12:27:22.108904] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:05:41.931 [2024-11-15 12:27:22.108998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904763 ] 00:05:41.931 [2024-11-15 12:27:22.174279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.931 [2024-11-15 12:27:22.232831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.189 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.189 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:42.189 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=904863 00:05:42.189 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:42.189 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 904863 /var/tmp/spdk2.sock 00:05:42.190 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:42.190 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 904863 /var/tmp/spdk2.sock 00:05:42.190 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:42.190 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.190 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:42.190 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.190 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 904863 /var/tmp/spdk2.sock 00:05:42.190 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 904863 ']' 00:05:42.190 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.190 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.190 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.190 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.190 12:27:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.447 [2024-11-15 12:27:22.547690] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:05:42.447 [2024-11-15 12:27:22.547782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904863 ] 00:05:42.447 [2024-11-15 12:27:22.646209] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 904763 has claimed it. 00:05:42.447 [2024-11-15 12:27:22.646262] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:43.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (904863) - No such process 00:05:43.013 ERROR: process (pid: 904863) is no longer running 00:05:43.013 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.013 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:43.013 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:43.013 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.013 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:43.013 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.013 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 904763 00:05:43.013 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 904763 00:05:43.013 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.272 lslocks: write error 00:05:43.272 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 904763 00:05:43.272 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 904763 ']' 00:05:43.272 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 904763 00:05:43.272 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:43.272 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.272 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 904763 00:05:43.272 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.272 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.272 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 904763' 00:05:43.272 killing process with pid 904763 00:05:43.272 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 904763 00:05:43.272 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 904763 00:05:43.839 00:05:43.839 real 0m1.905s 00:05:43.839 user 0m2.126s 00:05:43.839 sys 0m0.601s 00:05:43.839 12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.839 
12:27:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.839 ************************************ 00:05:43.839 END TEST locking_app_on_locked_coremask 00:05:43.839 ************************************ 00:05:43.839 12:27:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:43.839 12:27:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.839 12:27:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.839 12:27:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.839 ************************************ 00:05:43.839 START TEST locking_overlapped_coremask 00:05:43.839 ************************************ 00:05:43.839 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:43.839 12:27:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=905028 00:05:43.839 12:27:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:43.839 12:27:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 905028 /var/tmp/spdk.sock 00:05:43.839 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 905028 ']' 00:05:43.839 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.839 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.839 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.839 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.839 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.839 [2024-11-15 12:27:24.066003] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:05:43.839 [2024-11-15 12:27:24.066101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905028 ] 00:05:43.839 [2024-11-15 12:27:24.129148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.098 [2024-11-15 12:27:24.185766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.098 [2024-11-15 12:27:24.185831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.098 [2024-11-15 12:27:24.185835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=905156 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 905156 /var/tmp/spdk2.sock 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 905156 /var/tmp/spdk2.sock 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 905156 /var/tmp/spdk2.sock 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 905156 ']' 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.355 12:27:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.355 [2024-11-15 12:27:24.509766] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:05:44.355 [2024-11-15 12:27:24.509863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905156 ] 00:05:44.355 [2024-11-15 12:27:24.614373] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 905028 has claimed it. 00:05:44.355 [2024-11-15 12:27:24.614443] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:44.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (905156) - No such process 00:05:44.920 ERROR: process (pid: 905156) is no longer running 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 905028 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 905028 ']' 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 905028 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.920 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 905028 00:05:45.178 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.178 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.178 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 905028' 00:05:45.178 killing process with pid 905028 00:05:45.178 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 905028 00:05:45.178 12:27:25 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 905028 00:05:45.438 00:05:45.438 real 0m1.664s 00:05:45.438 user 0m4.663s 00:05:45.438 sys 0m0.452s 00:05:45.438 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.438 12:27:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.438 ************************************ 00:05:45.438 END TEST locking_overlapped_coremask 00:05:45.438 ************************************ 00:05:45.438 12:27:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:45.438 12:27:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.438 12:27:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.438 12:27:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.438 ************************************ 00:05:45.438 START TEST locking_overlapped_coremask_via_rpc 00:05:45.438 ************************************ 00:05:45.438 12:27:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:45.438 12:27:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=905320 00:05:45.438 12:27:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:45.438 12:27:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 905320 /var/tmp/spdk.sock 00:05:45.438 12:27:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 905320 ']' 00:05:45.438 12:27:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.438 12:27:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.438 12:27:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.438 12:27:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.438 12:27:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.438 [2024-11-15 12:27:25.779808] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:05:45.438 [2024-11-15 12:27:25.779892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905320 ] 00:05:45.696 [2024-11-15 12:27:25.847430] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
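The check_remaining_locks step traced a little earlier, at the end of the locking_overlapped_coremask test, is also worth spelling out: with the surviving -m 0x7 target still running, exactly the core 0-2 lock files should be present. A minimal restatement of that check, following the glob and brace expansion visible in the trace:

    check_remaining_locks() {
        # with a surviving -m 0x7 target, exactly the core 0-2 lock files should remain
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }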
00:05:45.696 [2024-11-15 12:27:25.847459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.696 [2024-11-15 12:27:25.905212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.696 [2024-11-15 12:27:25.905293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.696 [2024-11-15 12:27:25.905297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.954 12:27:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.954 12:27:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.954 12:27:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=905331 00:05:45.954 12:27:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:45.954 12:27:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 905331 /var/tmp/spdk2.sock 00:05:45.954 12:27:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 905331 ']' 00:05:45.954 12:27:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.954 12:27:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.954 12:27:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.954 12:27:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.954 12:27:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.954 [2024-11-15 12:27:26.243454] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:05:45.954 [2024-11-15 12:27:26.243536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905331 ] 00:05:46.212 [2024-11-15 12:27:26.346975] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:46.212 [2024-11-15 12:27:26.347011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.212 [2024-11-15 12:27:26.467925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.212 [2024-11-15 12:27:26.471775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:46.212 [2024-11-15 12:27:26.471779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.146 [2024-11-15 12:27:27.222825] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 905320 has claimed it. 
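The masks passed to the two targets make the failure deterministic: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so they intersect on exactly core 2, the core named in the claim_cpu_cores error above. The overlap can be checked with plain shell arithmetic:

    # 0x7 = cores 0,1,2 and 0x1c = cores 2,3,4; the bitwise AND is the contested core
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))    # prints 0x4, i.e. core 2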
00:05:47.146 request: 00:05:47.146 { 00:05:47.146 "method": "framework_enable_cpumask_locks", 00:05:47.146 "req_id": 1 00:05:47.146 } 00:05:47.146 Got JSON-RPC error response 00:05:47.146 response: 00:05:47.146 { 00:05:47.146 "code": -32603, 00:05:47.146 "message": "Failed to claim CPU core: 2" 00:05:47.146 } 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 905320 /var/tmp/spdk.sock 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 905320 ']' 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.146 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.403 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.403 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.403 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 905331 /var/tmp/spdk2.sock 00:05:47.403 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 905331 ']' 00:05:47.403 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.403 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.403 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
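The error object above is what the framework_enable_cpumask_locks RPC returns when one of the target's cores is already locked by another process. A sketch of issuing the same calls with scripts/rpc.py from an SPDK checkout (socket paths as used by the test; the success of the first call assumes no other process currently holds the locks):

    # against the default socket of the first target, the call takes the locks and succeeds
    ./scripts/rpc.py framework_enable_cpumask_locks
    # against the second target, whose mask overlaps on core 2, it fails with code -32603 as shown above
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks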
00:05:47.403 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.403 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.661 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.661 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.661 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:47.661 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:47.661 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:47.661 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:47.661 00:05:47.661 real 0m2.041s 00:05:47.661 user 0m1.100s 00:05:47.661 sys 0m0.181s 00:05:47.661 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.661 12:27:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.661 ************************************ 00:05:47.661 END TEST locking_overlapped_coremask_via_rpc 00:05:47.662 ************************************ 00:05:47.662 12:27:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:47.662 12:27:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 905320 ]] 00:05:47.662 12:27:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 905320 00:05:47.662 12:27:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 905320 ']' 00:05:47.662 12:27:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 905320 00:05:47.662 12:27:27 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:47.662 12:27:27 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.662 12:27:27 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 905320 00:05:47.662 12:27:27 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.662 12:27:27 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.662 12:27:27 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 905320' 00:05:47.662 killing process with pid 905320 00:05:47.662 12:27:27 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 905320 00:05:47.662 12:27:27 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 905320 00:05:47.919 12:27:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 905331 ]] 00:05:47.919 12:27:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 905331 00:05:47.919 12:27:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 905331 ']' 00:05:47.920 12:27:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 905331 00:05:47.920 12:27:28 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:47.920 12:27:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:05:47.920 12:27:28 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 905331 00:05:48.177 12:27:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:48.177 12:27:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:48.177 12:27:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 905331' 00:05:48.177 killing process with pid 905331 00:05:48.177 12:27:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 905331 00:05:48.177 12:27:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 905331 00:05:48.435 12:27:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.435 12:27:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:48.435 12:27:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 905320 ]] 00:05:48.435 12:27:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 905320 00:05:48.435 12:27:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 905320 ']' 00:05:48.435 12:27:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 905320 00:05:48.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (905320) - No such process 00:05:48.435 12:27:28 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 905320 is not found' 00:05:48.435 Process with pid 905320 is not found 00:05:48.435 12:27:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 905331 ]] 00:05:48.435 12:27:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 905331 00:05:48.435 12:27:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 905331 ']' 00:05:48.435 12:27:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 905331 00:05:48.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (905331) - No such process 00:05:48.435 12:27:28 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 905331 is not found' 00:05:48.435 Process with pid 905331 is not found 00:05:48.435 12:27:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.435 00:05:48.435 real 0m15.683s 00:05:48.435 user 0m28.524s 00:05:48.435 sys 0m5.163s 00:05:48.435 12:27:28 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.435 12:27:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.435 ************************************ 00:05:48.435 END TEST cpu_locks 00:05:48.435 ************************************ 00:05:48.435 00:05:48.435 real 0m40.544s 00:05:48.435 user 1m19.771s 00:05:48.435 sys 0m9.186s 00:05:48.435 12:27:28 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.435 12:27:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.435 ************************************ 00:05:48.435 END TEST event 00:05:48.435 ************************************ 00:05:48.435 12:27:28 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:48.435 12:27:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.435 12:27:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.435 12:27:28 -- common/autotest_common.sh@10 -- # set +x 00:05:48.693 ************************************ 00:05:48.693 START TEST thread 00:05:48.693 ************************************ 00:05:48.693 12:27:28 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:48.693 * Looking for test storage... 00:05:48.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:48.693 12:27:28 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:48.693 12:27:28 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:48.693 12:27:28 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.693 12:27:28 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.693 12:27:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.693 12:27:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.693 12:27:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.693 12:27:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.693 12:27:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.693 12:27:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.693 12:27:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.693 12:27:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.693 12:27:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.693 12:27:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.693 12:27:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.693 12:27:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:48.693 12:27:28 thread -- scripts/common.sh@345 -- # : 1 00:05:48.693 12:27:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.693 12:27:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.693 12:27:28 thread -- scripts/common.sh@365 -- # decimal 1 00:05:48.693 12:27:28 thread -- scripts/common.sh@353 -- # local d=1 00:05:48.693 12:27:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.693 12:27:28 thread -- scripts/common.sh@355 -- # echo 1 00:05:48.693 12:27:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.693 12:27:28 thread -- scripts/common.sh@366 -- # decimal 2 00:05:48.693 12:27:28 thread -- scripts/common.sh@353 -- # local d=2 00:05:48.693 12:27:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.693 12:27:28 thread -- scripts/common.sh@355 -- # echo 2 00:05:48.693 12:27:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.693 12:27:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.693 12:27:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.693 12:27:28 thread -- scripts/common.sh@368 -- # return 0 00:05:48.693 12:27:28 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.693 12:27:28 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.693 --rc genhtml_branch_coverage=1 00:05:48.693 --rc genhtml_function_coverage=1 00:05:48.693 --rc genhtml_legend=1 00:05:48.693 --rc geninfo_all_blocks=1 00:05:48.694 --rc geninfo_unexecuted_blocks=1 00:05:48.694 00:05:48.694 ' 00:05:48.694 12:27:28 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.694 --rc genhtml_branch_coverage=1 00:05:48.694 --rc genhtml_function_coverage=1 00:05:48.694 --rc genhtml_legend=1 00:05:48.694 --rc geninfo_all_blocks=1 00:05:48.694 --rc geninfo_unexecuted_blocks=1 00:05:48.694 00:05:48.694 ' 00:05:48.694 12:27:28 thread 
-- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.694 --rc genhtml_branch_coverage=1 00:05:48.694 --rc genhtml_function_coverage=1 00:05:48.694 --rc genhtml_legend=1 00:05:48.694 --rc geninfo_all_blocks=1 00:05:48.694 --rc geninfo_unexecuted_blocks=1 00:05:48.694 00:05:48.694 ' 00:05:48.694 12:27:28 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.694 --rc genhtml_branch_coverage=1 00:05:48.694 --rc genhtml_function_coverage=1 00:05:48.694 --rc genhtml_legend=1 00:05:48.694 --rc geninfo_all_blocks=1 00:05:48.694 --rc geninfo_unexecuted_blocks=1 00:05:48.694 00:05:48.694 ' 00:05:48.694 12:27:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:48.694 12:27:28 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:48.694 12:27:28 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.694 12:27:28 thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.694 ************************************ 00:05:48.694 START TEST thread_poller_perf 00:05:48.694 ************************************ 00:05:48.694 12:27:28 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:48.694 [2024-11-15 12:27:28.973429] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:05:48.694 [2024-11-15 12:27:28.973506] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905823 ] 00:05:48.952 [2024-11-15 12:27:29.040122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.952 [2024-11-15 12:27:29.095818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.952 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:49.886 [2024-11-15T11:27:30.230Z] ====================================== 00:05:49.886 [2024-11-15T11:27:30.230Z] busy:2710188381 (cyc) 00:05:49.886 [2024-11-15T11:27:30.230Z] total_run_count: 364000 00:05:49.886 [2024-11-15T11:27:30.230Z] tsc_hz: 2700000000 (cyc) 00:05:49.886 [2024-11-15T11:27:30.230Z] ====================================== 00:05:49.886 [2024-11-15T11:27:30.230Z] poller_cost: 7445 (cyc), 2757 (nsec) 00:05:49.886 00:05:49.886 real 0m1.207s 00:05:49.886 user 0m1.130s 00:05:49.886 sys 0m0.071s 00:05:49.886 12:27:30 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.886 12:27:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.886 ************************************ 00:05:49.886 END TEST thread_poller_perf 00:05:49.886 ************************************ 00:05:49.886 12:27:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:49.886 12:27:30 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:49.886 12:27:30 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.886 12:27:30 thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.886 ************************************ 00:05:49.886 START TEST thread_poller_perf 00:05:49.886 ************************************ 00:05:49.886 12:27:30 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:50.144 [2024-11-15 12:27:30.234592] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:05:50.144 [2024-11-15 12:27:30.234660] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905979 ] 00:05:50.144 [2024-11-15 12:27:30.300160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.144 [2024-11-15 12:27:30.357290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.144 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:51.518 [2024-11-15T11:27:31.862Z] ====================================== 00:05:51.518 [2024-11-15T11:27:31.862Z] busy:2702532636 (cyc) 00:05:51.518 [2024-11-15T11:27:31.862Z] total_run_count: 4675000 00:05:51.518 [2024-11-15T11:27:31.862Z] tsc_hz: 2700000000 (cyc) 00:05:51.518 [2024-11-15T11:27:31.862Z] ====================================== 00:05:51.518 [2024-11-15T11:27:31.862Z] poller_cost: 578 (cyc), 214 (nsec) 00:05:51.518 00:05:51.518 real 0m1.202s 00:05:51.518 user 0m1.139s 00:05:51.518 sys 0m0.058s 00:05:51.518 12:27:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.518 12:27:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.518 ************************************ 00:05:51.518 END TEST thread_poller_perf 00:05:51.518 ************************************ 00:05:51.518 12:27:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:51.518 00:05:51.518 real 0m2.647s 00:05:51.518 user 0m2.397s 00:05:51.518 sys 0m0.253s 00:05:51.518 12:27:31 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.518 12:27:31 thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.518 ************************************ 00:05:51.518 END TEST thread 00:05:51.518 ************************************ 00:05:51.518 12:27:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:51.518 12:27:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:51.518 12:27:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.518 12:27:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.518 12:27:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.518 ************************************ 00:05:51.518 START TEST app_cmdline 00:05:51.518 ************************************ 00:05:51.518 12:27:31 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:51.518 * Looking for test storage... 
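The poller_cost figures in the two summaries above are simply the busy cycle count divided by the number of poller iterations, converted to nanoseconds with the reported TSC frequency. Recomputing the first run with shell integer arithmetic (values copied from the log; the 0-microsecond run works out the same way to 578 cycles / 214 nsec):

    busy=2710188381 runs=364000 tsc_hz=2700000000
    echo "poller_cost: $(( busy / runs )) cyc"                           # 7445
    echo "poller_cost: $(( busy / runs * 1000000000 / tsc_hz )) nsec"    # 2757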
00:05:51.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:51.518 12:27:31 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.518 12:27:31 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.518 12:27:31 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.518 12:27:31 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.518 12:27:31 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.519 12:27:31 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:51.519 12:27:31 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.519 12:27:31 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.519 --rc genhtml_branch_coverage=1 00:05:51.519 --rc genhtml_function_coverage=1 00:05:51.519 --rc genhtml_legend=1 00:05:51.519 --rc geninfo_all_blocks=1 00:05:51.519 --rc geninfo_unexecuted_blocks=1 00:05:51.519 00:05:51.519 ' 00:05:51.519 12:27:31 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.519 --rc genhtml_branch_coverage=1 00:05:51.519 --rc genhtml_function_coverage=1 00:05:51.519 --rc genhtml_legend=1 00:05:51.519 --rc geninfo_all_blocks=1 00:05:51.519 --rc geninfo_unexecuted_blocks=1 
00:05:51.519 00:05:51.519 ' 00:05:51.519 12:27:31 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.519 --rc genhtml_branch_coverage=1 00:05:51.519 --rc genhtml_function_coverage=1 00:05:51.519 --rc genhtml_legend=1 00:05:51.519 --rc geninfo_all_blocks=1 00:05:51.519 --rc geninfo_unexecuted_blocks=1 00:05:51.519 00:05:51.519 ' 00:05:51.519 12:27:31 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.519 --rc genhtml_branch_coverage=1 00:05:51.519 --rc genhtml_function_coverage=1 00:05:51.519 --rc genhtml_legend=1 00:05:51.519 --rc geninfo_all_blocks=1 00:05:51.519 --rc geninfo_unexecuted_blocks=1 00:05:51.519 00:05:51.519 ' 00:05:51.519 12:27:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:51.519 12:27:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=906193 00:05:51.519 12:27:31 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:51.519 12:27:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 906193 00:05:51.519 12:27:31 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 906193 ']' 00:05:51.519 12:27:31 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.519 12:27:31 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.519 12:27:31 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.519 12:27:31 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.519 12:27:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.519 [2024-11-15 12:27:31.686998] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:05:51.519 [2024-11-15 12:27:31.687092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906193 ] 00:05:51.519 [2024-11-15 12:27:31.753865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.519 [2024-11-15 12:27:31.812594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.777 12:27:32 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.777 12:27:32 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:51.777 12:27:32 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:52.035 { 00:05:52.035 "version": "SPDK v25.01-pre git sha1 c46ddd981", 00:05:52.035 "fields": { 00:05:52.035 "major": 25, 00:05:52.035 "minor": 1, 00:05:52.035 "patch": 0, 00:05:52.035 "suffix": "-pre", 00:05:52.035 "commit": "c46ddd981" 00:05:52.035 } 00:05:52.035 } 00:05:52.035 12:27:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:52.035 12:27:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:52.035 12:27:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:52.035 12:27:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:52.035 12:27:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:52.035 12:27:32 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.035 12:27:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:52.035 12:27:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:52.035 12:27:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:52.035 12:27:32 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.291 12:27:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:52.291 12:27:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:52.291 12:27:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:52.291 12:27:32 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:52.291 12:27:32 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:52.291 12:27:32 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:52.291 12:27:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.291 12:27:32 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:52.291 12:27:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.291 12:27:32 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:52.291 12:27:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.291 12:27:32 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:52.291 12:27:32 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:52.291 12:27:32 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:52.549 request: 00:05:52.549 { 00:05:52.549 "method": "env_dpdk_get_mem_stats", 00:05:52.549 "req_id": 1 00:05:52.549 } 00:05:52.549 Got JSON-RPC error response 00:05:52.549 response: 00:05:52.549 { 00:05:52.549 "code": -32601, 00:05:52.549 "message": "Method not found" 00:05:52.549 } 00:05:52.549 12:27:32 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:52.549 12:27:32 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.549 12:27:32 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:52.549 12:27:32 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.549 12:27:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 906193 00:05:52.549 12:27:32 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 906193 ']' 00:05:52.549 12:27:32 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 906193 00:05:52.549 12:27:32 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:52.549 12:27:32 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.549 12:27:32 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 906193 00:05:52.549 12:27:32 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.549 12:27:32 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.549 12:27:32 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 906193' 00:05:52.549 killing process with pid 906193 00:05:52.549 12:27:32 app_cmdline -- common/autotest_common.sh@973 -- # kill 906193 00:05:52.549 12:27:32 app_cmdline -- common/autotest_common.sh@978 -- # wait 906193 00:05:52.807 00:05:52.807 real 0m1.642s 00:05:52.807 user 0m2.039s 00:05:52.807 sys 0m0.483s 00:05:52.807 12:27:33 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.807 12:27:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:52.807 ************************************ 00:05:52.807 END TEST app_cmdline 00:05:52.807 ************************************ 00:05:53.066 12:27:33 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:53.066 12:27:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.066 12:27:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.066 12:27:33 -- common/autotest_common.sh@10 -- # set +x 00:05:53.066 ************************************ 00:05:53.066 START TEST version 00:05:53.066 ************************************ 00:05:53.066 12:27:33 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:53.066 * Looking for test storage... 
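The "Method not found" response above is the expected effect of the --rpcs-allowed allow-list the cmdline test starts the target with: only spdk_get_version and rpc_get_methods are callable, so env_dpdk_get_mem_stats is rejected with -32601 even though it is an ordinary SPDK RPC. A sketch of the same behaviour against a local build (paths assumed, not taken from the CI workspace):

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    sleep 2
    ./scripts/rpc.py spdk_get_version         # on the allow-list, returns the version object shown earlier
    ./scripts/rpc.py env_dpdk_get_mem_stats   # rejected with JSON-RPC error -32601 "Method not found"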
00:05:53.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:53.066 12:27:33 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.066 12:27:33 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.066 12:27:33 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.066 12:27:33 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.066 12:27:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.066 12:27:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.066 12:27:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.066 12:27:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.066 12:27:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.066 12:27:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.066 12:27:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.066 12:27:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.066 12:27:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.066 12:27:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.066 12:27:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.066 12:27:33 version -- scripts/common.sh@344 -- # case "$op" in 00:05:53.066 12:27:33 version -- scripts/common.sh@345 -- # : 1 00:05:53.066 12:27:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.066 12:27:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.066 12:27:33 version -- scripts/common.sh@365 -- # decimal 1 00:05:53.066 12:27:33 version -- scripts/common.sh@353 -- # local d=1 00:05:53.066 12:27:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.066 12:27:33 version -- scripts/common.sh@355 -- # echo 1 00:05:53.066 12:27:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.066 12:27:33 version -- scripts/common.sh@366 -- # decimal 2 00:05:53.066 12:27:33 version -- scripts/common.sh@353 -- # local d=2 00:05:53.066 12:27:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.066 12:27:33 version -- scripts/common.sh@355 -- # echo 2 00:05:53.066 12:27:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.066 12:27:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.066 12:27:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.066 12:27:33 version -- scripts/common.sh@368 -- # return 0 00:05:53.066 12:27:33 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.066 12:27:33 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.066 --rc genhtml_branch_coverage=1 00:05:53.066 --rc genhtml_function_coverage=1 00:05:53.066 --rc genhtml_legend=1 00:05:53.066 --rc geninfo_all_blocks=1 00:05:53.066 --rc geninfo_unexecuted_blocks=1 00:05:53.066 00:05:53.066 ' 00:05:53.066 12:27:33 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.066 --rc genhtml_branch_coverage=1 00:05:53.066 --rc genhtml_function_coverage=1 00:05:53.066 --rc genhtml_legend=1 00:05:53.066 --rc geninfo_all_blocks=1 00:05:53.066 --rc geninfo_unexecuted_blocks=1 00:05:53.066 00:05:53.066 ' 00:05:53.066 12:27:33 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.066 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.066 --rc genhtml_branch_coverage=1 00:05:53.066 --rc genhtml_function_coverage=1 00:05:53.067 --rc genhtml_legend=1 00:05:53.067 --rc geninfo_all_blocks=1 00:05:53.067 --rc geninfo_unexecuted_blocks=1 00:05:53.067 00:05:53.067 ' 00:05:53.067 12:27:33 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.067 --rc genhtml_branch_coverage=1 00:05:53.067 --rc genhtml_function_coverage=1 00:05:53.067 --rc genhtml_legend=1 00:05:53.067 --rc geninfo_all_blocks=1 00:05:53.067 --rc geninfo_unexecuted_blocks=1 00:05:53.067 00:05:53.067 ' 00:05:53.067 12:27:33 version -- app/version.sh@17 -- # get_header_version major 00:05:53.067 12:27:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:53.067 12:27:33 version -- app/version.sh@14 -- # cut -f2 00:05:53.067 12:27:33 version -- app/version.sh@14 -- # tr -d '"' 00:05:53.067 12:27:33 version -- app/version.sh@17 -- # major=25 00:05:53.067 12:27:33 version -- app/version.sh@18 -- # get_header_version minor 00:05:53.067 12:27:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:53.067 12:27:33 version -- app/version.sh@14 -- # cut -f2 00:05:53.067 12:27:33 version -- app/version.sh@14 -- # tr -d '"' 00:05:53.067 12:27:33 version -- app/version.sh@18 -- # minor=1 00:05:53.067 12:27:33 version -- app/version.sh@19 -- # get_header_version patch 00:05:53.067 12:27:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:53.067 12:27:33 version -- app/version.sh@14 -- # cut -f2 00:05:53.067 12:27:33 version -- app/version.sh@14 -- # tr -d '"' 00:05:53.067 12:27:33 version -- app/version.sh@19 -- # patch=0 00:05:53.067 12:27:33 version -- app/version.sh@20 -- # get_header_version suffix 00:05:53.067 12:27:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:53.067 12:27:33 version -- app/version.sh@14 -- # cut -f2 00:05:53.067 12:27:33 version -- app/version.sh@14 -- # tr -d '"' 00:05:53.067 12:27:33 version -- app/version.sh@20 -- # suffix=-pre 00:05:53.067 12:27:33 version -- app/version.sh@22 -- # version=25.1 00:05:53.067 12:27:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:53.067 12:27:33 version -- app/version.sh@28 -- # version=25.1rc0 00:05:53.067 12:27:33 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:53.067 12:27:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:53.067 12:27:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:53.067 12:27:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:53.067 00:05:53.067 real 0m0.199s 00:05:53.067 user 0m0.129s 00:05:53.067 sys 0m0.097s 00:05:53.067 12:27:33 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.067 
12:27:33 version -- common/autotest_common.sh@10 -- # set +x 00:05:53.067 ************************************ 00:05:53.067 END TEST version 00:05:53.067 ************************************ 00:05:53.067 12:27:33 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:53.067 12:27:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:53.067 12:27:33 -- spdk/autotest.sh@194 -- # uname -s 00:05:53.067 12:27:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:53.067 12:27:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:53.067 12:27:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:53.067 12:27:33 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:53.067 12:27:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:53.067 12:27:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:53.067 12:27:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:53.067 12:27:33 -- common/autotest_common.sh@10 -- # set +x 00:05:53.326 12:27:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:53.326 12:27:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:53.326 12:27:33 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:53.326 12:27:33 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:53.326 12:27:33 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:53.326 12:27:33 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:53.326 12:27:33 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:53.326 12:27:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:53.326 12:27:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.326 12:27:33 -- common/autotest_common.sh@10 -- # set +x 00:05:53.326 ************************************ 00:05:53.326 START TEST nvmf_tcp 00:05:53.326 ************************************ 00:05:53.326 12:27:33 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:53.326 * Looking for test storage... 
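The version test above derives its numbers by grepping the SPDK_VERSION_* defines out of include/spdk/version.h, which is how 25, 1, 0 and "-pre" combine into the 25.1rc0 string compared against python3 -c 'import spdk; print(spdk.__version__)'. The same extraction can be run by hand from an SPDK checkout, mirroring the grep/cut/tr pipeline the test uses (the tab-delimited cut is assumed to match the header's layout, as it does in the test):

    hdr=include/spdk/version.h
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"'    # 25
    grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"'   # -pre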
00:05:53.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:53.326 12:27:33 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.326 12:27:33 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.326 12:27:33 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.326 12:27:33 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.326 12:27:33 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:53.326 12:27:33 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.326 12:27:33 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.326 --rc genhtml_branch_coverage=1 00:05:53.326 --rc genhtml_function_coverage=1 00:05:53.326 --rc genhtml_legend=1 00:05:53.326 --rc geninfo_all_blocks=1 00:05:53.326 --rc geninfo_unexecuted_blocks=1 00:05:53.326 00:05:53.326 ' 00:05:53.326 12:27:33 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.326 --rc genhtml_branch_coverage=1 00:05:53.326 --rc genhtml_function_coverage=1 00:05:53.326 --rc genhtml_legend=1 00:05:53.326 --rc geninfo_all_blocks=1 00:05:53.326 --rc geninfo_unexecuted_blocks=1 00:05:53.326 00:05:53.326 ' 00:05:53.326 12:27:33 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:53.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.326 --rc genhtml_branch_coverage=1 00:05:53.326 --rc genhtml_function_coverage=1 00:05:53.326 --rc genhtml_legend=1 00:05:53.326 --rc geninfo_all_blocks=1 00:05:53.326 --rc geninfo_unexecuted_blocks=1 00:05:53.326 00:05:53.326 ' 00:05:53.326 12:27:33 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.326 --rc genhtml_branch_coverage=1 00:05:53.326 --rc genhtml_function_coverage=1 00:05:53.326 --rc genhtml_legend=1 00:05:53.326 --rc geninfo_all_blocks=1 00:05:53.326 --rc geninfo_unexecuted_blocks=1 00:05:53.326 00:05:53.326 ' 00:05:53.326 12:27:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:53.326 12:27:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:53.326 12:27:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:53.326 12:27:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:53.326 12:27:33 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.326 12:27:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.326 ************************************ 00:05:53.326 START TEST nvmf_target_core 00:05:53.326 ************************************ 00:05:53.326 12:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:53.585 * Looking for test storage... 00:05:53.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.585 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.586 --rc genhtml_branch_coverage=1 00:05:53.586 --rc genhtml_function_coverage=1 00:05:53.586 --rc genhtml_legend=1 00:05:53.586 --rc geninfo_all_blocks=1 00:05:53.586 --rc geninfo_unexecuted_blocks=1 00:05:53.586 00:05:53.586 ' 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.586 --rc genhtml_branch_coverage=1 00:05:53.586 --rc genhtml_function_coverage=1 00:05:53.586 --rc genhtml_legend=1 00:05:53.586 --rc geninfo_all_blocks=1 00:05:53.586 --rc geninfo_unexecuted_blocks=1 00:05:53.586 00:05:53.586 ' 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.586 --rc genhtml_branch_coverage=1 00:05:53.586 --rc genhtml_function_coverage=1 00:05:53.586 --rc genhtml_legend=1 00:05:53.586 --rc geninfo_all_blocks=1 00:05:53.586 --rc geninfo_unexecuted_blocks=1 00:05:53.586 00:05:53.586 ' 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.586 --rc genhtml_branch_coverage=1 00:05:53.586 --rc genhtml_function_coverage=1 00:05:53.586 --rc genhtml_legend=1 00:05:53.586 --rc geninfo_all_blocks=1 00:05:53.586 --rc geninfo_unexecuted_blocks=1 00:05:53.586 00:05:53.586 ' 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:53.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:53.586 
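The "[: : integer expression expected" message in the trace above comes from the guard at nvmf/common.sh line 33: '[' '' -eq 1 ']' is evaluated with an empty left operand because the controlling variable is unset, and bash's test builtin cannot compare an empty string as an integer. The guard fails harmlessly and the run continues, but a defaulted expansion would keep it quiet. A minimal sketch, assuming the flag is meant to default to 0 when unset (FLAG is a stand-in name; the actual variable tested at line 33 is not visible in this log):

    # quieter form of the guard traced above; FLAG is a placeholder name
    if [ "${FLAG:-0}" -eq 1 ]; then
        : # append the optional nvmf_tgt argument to NVMF_APP here
    fi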
************************************ 00:05:53.586 START TEST nvmf_abort 00:05:53.586 ************************************ 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:53.586 * Looking for test storage... 00:05:53.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.586 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.845 --rc genhtml_branch_coverage=1 00:05:53.845 --rc genhtml_function_coverage=1 00:05:53.845 --rc genhtml_legend=1 00:05:53.845 --rc geninfo_all_blocks=1 00:05:53.845 --rc geninfo_unexecuted_blocks=1 00:05:53.845 00:05:53.845 ' 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.845 --rc genhtml_branch_coverage=1 00:05:53.845 --rc genhtml_function_coverage=1 00:05:53.845 --rc genhtml_legend=1 00:05:53.845 --rc geninfo_all_blocks=1 00:05:53.845 --rc geninfo_unexecuted_blocks=1 00:05:53.845 00:05:53.845 ' 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.845 --rc genhtml_branch_coverage=1 00:05:53.845 --rc genhtml_function_coverage=1 00:05:53.845 --rc genhtml_legend=1 00:05:53.845 --rc geninfo_all_blocks=1 00:05:53.845 --rc geninfo_unexecuted_blocks=1 00:05:53.845 00:05:53.845 ' 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.845 --rc genhtml_branch_coverage=1 00:05:53.845 --rc genhtml_function_coverage=1 00:05:53.845 --rc genhtml_legend=1 00:05:53.845 --rc geninfo_all_blocks=1 00:05:53.845 --rc geninfo_unexecuted_blocks=1 00:05:53.845 00:05:53.845 ' 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.845 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:53.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
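nvmftestinit builds the loopback NVMe/TCP topology that the rest of this abort run uses; the ip/iptables commands traced below (nvmf/common.sh lines 250-291) reduce to the sequence sketched here, using the interface names and addresses visible in the log (cvl_0_0, cvl_0_1, 10.0.0.1/10.0.0.2, port 4420). This is a condensed sketch, not a verbatim excerpt:

    # target NIC moves into its own namespace; the initiator NIC stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port toward the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # connectivity checks mirrored by the ping output further down
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1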
00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:53.846 12:27:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:56.378 12:27:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:56.378 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:56.378 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:56.378 12:27:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:56.378 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:56.378 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:56.378 12:27:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:56.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:56.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:05:56.378 00:05:56.378 --- 10.0.0.2 ping statistics --- 00:05:56.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:56.378 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:56.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:56.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:05:56.378 00:05:56.378 --- 10.0.0.1 ping statistics --- 00:05:56.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:56.378 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:05:56.378 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=908280 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 908280 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 908280 ']' 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.379 [2024-11-15 12:27:36.350111] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:05:56.379 [2024-11-15 12:27:36.350205] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:56.379 [2024-11-15 12:27:36.421502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.379 [2024-11-15 12:27:36.476482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:56.379 [2024-11-15 12:27:36.476539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:56.379 [2024-11-15 12:27:36.476567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:56.379 [2024-11-15 12:27:36.476579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:56.379 [2024-11-15 12:27:36.476592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:56.379 [2024-11-15 12:27:36.478215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.379 [2024-11-15 12:27:36.478277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.379 [2024-11-15 12:27:36.478281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.379 [2024-11-15 12:27:36.625623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.379 Malloc0 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.379 Delay0 
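With networking up, target/abort.sh assembles the target purely over SPDK's JSON-RPC interface; rpc_cmd in the surrounding trace is the harness wrapper around those calls (rendered below as scripts/rpc.py invocations, which is an assumption about the wrapper rather than something this log shows). A condensed sketch of the bring-up and the abort workload, using only parameters visible in the trace above and below:

    # TCP transport with the options carried in NVMF_TRANSPORT_OPTS plus -u 8192 -a 256
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    # 64 MB malloc bdev with 4096-byte blocks, wrapped in a delay bdev (latencies as traced)
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # subsystem cnode0, Delay0 as its namespace, data and discovery listeners on the target IP
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # the abort example then drives the delayed namespace at queue depth 128 (-q 128) on core mask 0x1
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128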
00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.379 [2024-11-15 12:27:36.692912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.379 12:27:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:56.637 [2024-11-15 12:27:36.808526] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:59.166 Initializing NVMe Controllers 00:05:59.166 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:59.166 controller IO queue size 128 less than required 00:05:59.166 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:59.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:59.167 Initialization complete. Launching workers. 
00:05:59.167 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28178 00:05:59.167 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28239, failed to submit 62 00:05:59.167 success 28182, unsuccessful 57, failed 0 00:05:59.167 12:27:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:59.167 12:27:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.167 12:27:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:59.167 12:27:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.167 12:27:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:59.167 12:27:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:59.167 12:27:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:59.167 12:27:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:59.167 12:27:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:59.167 12:27:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:59.167 12:27:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:59.167 12:27:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:59.167 rmmod nvme_tcp 00:05:59.167 rmmod nvme_fabrics 00:05:59.167 rmmod nvme_keyring 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 908280 ']' 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 908280 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 908280 ']' 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 908280 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 908280 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 908280' 00:05:59.167 killing process with pid 908280 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 908280 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 908280 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:59.167 12:27:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:01.074 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:01.074 00:06:01.074 real 0m7.527s 00:06:01.074 user 0m10.838s 00:06:01.074 sys 0m2.698s 00:06:01.074 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.074 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.074 ************************************ 00:06:01.074 END TEST nvmf_abort 00:06:01.074 ************************************ 00:06:01.074 12:27:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:01.074 12:27:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:01.074 12:27:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.074 12:27:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:01.074 ************************************ 00:06:01.074 START TEST nvmf_ns_hotplug_stress 00:06:01.074 ************************************ 00:06:01.074 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:01.334 * Looking for test storage... 
00:06:01.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.334 --rc genhtml_branch_coverage=1 00:06:01.334 --rc genhtml_function_coverage=1 00:06:01.334 --rc genhtml_legend=1 00:06:01.334 --rc geninfo_all_blocks=1 00:06:01.334 --rc geninfo_unexecuted_blocks=1 00:06:01.334 00:06:01.334 ' 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.334 --rc genhtml_branch_coverage=1 00:06:01.334 --rc genhtml_function_coverage=1 00:06:01.334 --rc genhtml_legend=1 00:06:01.334 --rc geninfo_all_blocks=1 00:06:01.334 --rc geninfo_unexecuted_blocks=1 00:06:01.334 00:06:01.334 ' 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.334 --rc genhtml_branch_coverage=1 00:06:01.334 --rc genhtml_function_coverage=1 00:06:01.334 --rc genhtml_legend=1 00:06:01.334 --rc geninfo_all_blocks=1 00:06:01.334 --rc geninfo_unexecuted_blocks=1 00:06:01.334 00:06:01.334 ' 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.334 --rc genhtml_branch_coverage=1 00:06:01.334 --rc genhtml_function_coverage=1 00:06:01.334 --rc genhtml_legend=1 00:06:01.334 --rc geninfo_all_blocks=1 00:06:01.334 --rc geninfo_unexecuted_blocks=1 00:06:01.334 00:06:01.334 ' 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:01.334 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:01.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:01.335 12:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:03.865 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.865 
12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:03.865 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:03.865 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:03.865 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:03.865 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:03.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:03.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:06:03.866 00:06:03.866 --- 10.0.0.2 ping statistics --- 00:06:03.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.866 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:03.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:03.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:06:03.866 00:06:03.866 --- 10.0.0.1 ping statistics --- 00:06:03.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.866 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:03.866 12:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:03.866 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:03.866 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:03.866 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.866 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:03.866 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=910640 00:06:03.866 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:03.866 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 910640 00:06:03.866 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
910640 ']' 00:06:03.866 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.866 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.866 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.866 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.866 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:03.866 [2024-11-15 12:27:44.051857] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:06:03.866 [2024-11-15 12:27:44.051935] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:03.866 [2024-11-15 12:27:44.123883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.866 [2024-11-15 12:27:44.183794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:03.866 [2024-11-15 12:27:44.183855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:03.866 [2024-11-15 12:27:44.183868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:03.866 [2024-11-15 12:27:44.183879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:03.866 [2024-11-15 12:27:44.183889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
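At this point both ends of the NVMe/TCP connection live on one host: the second E810 port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, while the first port (cvl_0_0) has been moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, with an iptables rule opening port 4420 and a ping in each direction as a sanity check. A minimal sketch of that plumbing, using only commands already visible in this trace (interface names and addresses are specific to this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port into the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

    ping -c 1 10.0.0.2                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator

nvmf_tgt is launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE, traced above), which is why the reactor notices that follow report cores 1-3 only.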
00:06:03.866 [2024-11-15 12:27:44.185491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.866 [2024-11-15 12:27:44.185553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.866 [2024-11-15 12:27:44.185557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.124 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.124 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:04.124 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:04.124 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:04.124 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.124 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:04.124 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:04.124 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:04.381 [2024-11-15 12:27:44.582011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.382 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:04.661 12:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:04.917 [2024-11-15 12:27:45.120962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:04.917 12:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:05.173 12:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:05.431 Malloc0 00:06:05.431 12:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:05.764 Delay0 00:06:05.764 12:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.020 12:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:06.277 NULL1 00:06:06.277 12:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:06.533 12:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=910943 00:06:06.533 12:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:06.533 12:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:06.533 12:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.904 Read completed with error (sct=0, sc=11) 00:06:07.904 12:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.905 12:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:07.905 12:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:08.162 true 00:06:08.162 12:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:08.162 12:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.095 12:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.354 12:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:09.354 12:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:09.612 true 00:06:09.612 12:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:09.612 12:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.870 12:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.128 12:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:10.128 12:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:10.386 true 00:06:10.386 12:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:10.386 12:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.643 12:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.901 12:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:10.901 12:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:11.158 true 00:06:11.158 12:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:11.158 12:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.091 12:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.348 12:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:12.348 12:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:12.606 true 00:06:12.606 12:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:12.606 12:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.864 12:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.121 12:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:13.122 12:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:13.379 true 00:06:13.379 12:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 
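With the target plumbed (TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, Delay0 and NULL1 added as namespaces), spdk_nvme_perf is started as PERF_PID 910943 to issue 512-byte random reads for 30 seconds while the script repeatedly hot-removes namespace 1, hot-adds Delay0 back, and grows NULL1 by one unit per pass. A rough reconstruction of that cycle from the RPCs traced above, not the script verbatim (rpc.py path shortened; the exact ordering inside the loop may differ):

    rpc=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000

    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    while kill -0 "$PERF_PID" 2>/dev/null; do         # run until the 30 s perf job exits
        $rpc nvmf_subsystem_remove_ns "$nqn" 1        # hot-remove namespace 1 under load
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0      # hot-add it back
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"      # resize the other namespace's bdev
    done

The 'Read completed with error (sct=0, sc=11)' messages interleaved with the trace are the expected side effect of pulling a namespace out from under active reads; perf keeps running and the errors are rate-suppressed rather than fatal.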
00:06:13.379 12:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.311 12:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.567 12:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:14.567 12:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:14.824 true 00:06:14.824 12:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:14.824 12:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.083 12:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.381 12:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:15.381 12:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:15.682 true 00:06:15.682 12:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:15.682 12:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.961 12:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.218 12:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:16.218 12:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:16.475 true 00:06:16.475 12:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:16.475 12:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.407 12:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.665 12:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:17.665 12:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:17.922 true 00:06:18.179 12:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:18.179 12:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.437 12:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.694 12:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:18.694 12:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:18.951 true 00:06:18.951 12:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:18.951 12:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.208 12:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.466 12:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:19.467 12:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:19.725 true 00:06:19.725 12:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:19.725 12:27:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.657 12:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.915 12:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:20.915 12:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:21.173 true 00:06:21.173 12:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:21.173 12:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.429 12:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.687 12:28:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:21.687 12:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:21.945 true 00:06:21.945 12:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:21.945 12:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.203 12:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.461 12:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:22.461 12:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:22.718 true 00:06:22.718 12:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:22.718 12:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.648 12:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.905 12:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:23.905 12:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:24.163 true 00:06:24.163 12:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:24.163 12:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.420 12:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.676 12:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:24.677 12:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:24.933 true 00:06:24.933 12:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:24.933 12:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.190 12:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.756 12:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:25.756 12:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:25.756 true 00:06:25.756 12:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:25.756 12:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.690 12:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.947 12:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:26.947 12:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:27.205 true 00:06:27.464 12:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:27.464 12:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.722 12:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.980 12:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:27.980 12:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:28.238 true 00:06:28.238 12:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:28.238 12:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.170 12:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.170 12:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:29.170 12:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:29.427 true 00:06:29.427 12:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:29.427 12:28:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.685 12:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.943 12:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:29.943 12:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:30.201 true 00:06:30.458 12:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:30.458 12:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.716 12:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.974 12:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:30.974 12:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:31.231 true 00:06:31.231 12:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:31.231 12:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.165 12:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.422 12:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:32.422 12:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:32.679 true 00:06:32.679 12:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:32.679 12:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.937 12:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.195 12:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:33.195 12:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:33.452 true 00:06:33.452 12:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:33.452 12:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.383 12:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.383 12:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:34.383 12:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:34.641 true 00:06:34.641 12:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:34.641 12:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.898 12:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.155 12:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:35.155 12:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:35.412 true 00:06:35.412 12:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:35.412 12:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.976 12:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.976 12:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:35.976 12:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:36.233 true 00:06:36.233 12:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:36.233 12:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.165 Initializing NVMe Controllers 00:06:37.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:37.165 Controller IO queue size 128, less than required. 
00:06:37.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:37.165 Controller IO queue size 128, less than required. 00:06:37.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:37.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:37.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:37.165 Initialization complete. Launching workers. 00:06:37.165 ======================================================== 00:06:37.165 Latency(us) 00:06:37.165 Device Information : IOPS MiB/s Average min max 00:06:37.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 485.97 0.24 115963.18 3364.70 1024145.89 00:06:37.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8984.20 4.39 14247.16 3164.11 446259.42 00:06:37.165 ======================================================== 00:06:37.165 Total : 9470.16 4.62 19466.78 3164.11 1024145.89 00:06:37.165 00:06:37.165 12:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.422 12:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:37.422 12:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:37.678 true 00:06:37.678 12:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 910943 00:06:37.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (910943) - No such process 00:06:37.678 12:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 910943 00:06:37.678 12:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.243 12:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.243 12:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:38.243 12:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:38.243 12:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:38.243 12:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.243 12:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:38.500 null0 00:06:38.500 12:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.500 12:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.500 12:28:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:38.758 null1 00:06:38.758 12:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.758 12:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.758 12:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:39.323 null2 00:06:39.323 12:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:39.323 12:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.323 12:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:39.323 null3 00:06:39.323 12:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:39.323 12:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.323 12:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:39.888 null4 00:06:39.888 12:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:39.888 12:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.888 12:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:39.888 null5 00:06:39.888 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:39.888 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.888 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:40.453 null6 00:06:40.453 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:40.453 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:40.453 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:40.453 null7 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 
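The @58-@60 entries above set up the multi-worker phase: eight null bdevs, null0 through null7, are created with bdev_null_create, one per worker thread. A hedged sketch of that setup loop, assuming the usual rpc.py bdev_null_create <name> <size_MB> <block_size> argument order that the "100 4096" arguments suggest:

    # Sketch of the per-worker null bdev setup logged at ns_hotplug_stress.sh@58-@60.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8

    for ((i = 0; i < nthreads; i++)); do
        # one 100 MB null bdev with a 4096-byte block size per worker: null0 .. null7
        "$rpc" bdev_null_create "null$i" 100 4096
    done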
00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:40.711 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
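From here on the log interleaves eight background copies of the script's add_remove helper, one per null bdev: each worker pins its own namespace ID with nvmf_subsystem_add_ns -n, removes it again, and repeats ten times (the @16 "(( i < 10 ))" checks), while the parent collects the worker PIDs and later waits on them (the "@66 -- # wait 915138 ..." entry below). A sketch of that pattern as reconstructed from the @14-@18 and @62-@66 markers; helper and variable names follow the log, everything else is an assumption:

    # Sketch of the parallel namespace hotplug workers (ns_hotplug_stress.sh@14-@18, @62-@66).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    nthreads=8
    pids=()

    add_remove() {                                   # one worker: fixed NSID and backing bdev
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    for ((i = 0; i < nthreads; i++)); do             # launch all eight workers in parallel
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                                # block until every worker has finished

Because the workers run concurrently against the same subsystem, their add/remove output lands in the log in whatever order the calls complete, which is why the following entries jump between namespace IDs.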
00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 915138 915139 915140 915143 915145 915147 915149 915151 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.712 12:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.969 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.969 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.969 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.969 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.969 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.969 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.969 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.969 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.227 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.485 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.486 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.486 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.486 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.486 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.486 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.486 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.486 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.744 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.744 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.744 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.744 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.744 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.744 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.744 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.744 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.744 12:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.744 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.002 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.002 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.002 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.002 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.002 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.002 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.002 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.002 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.260 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.260 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.260 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.260 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.260 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.260 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.260 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.260 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.260 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.260 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.260 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.260 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.518 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.518 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.518 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.518 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.518 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.518 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.518 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.518 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.518 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.518 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.518 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.518 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.776 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.776 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.776 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.776 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.776 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.776 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.776 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.776 12:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.034 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:43.292 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.292 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.292 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:43.292 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.292 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.292 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.292 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.292 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:43.550 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.550 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.550 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.550 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.550 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.550 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:43.550 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.550 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.550 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:43.550 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
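Stripped of the loop bookkeeping, each interleaved worker iteration above reduces to one add/remove pair of rpc.py calls against the same subsystem; for example, the worker driving NSID 5 with null4 effectively runs:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4   # attach null4 as NSID 5
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5         # detach NSID 5 again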
00:06:43.550 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.550 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:43.550 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.550 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.551 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:43.551 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.551 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.551 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.551 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.551 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.551 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.551 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.551 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.551 12:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.809 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.809 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.809 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:43.809 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.809 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.809 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.809 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.809 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.068 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.326 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.327 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.327 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.327 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.585 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.585 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.585 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.585 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.585 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.585 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.585 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.585 12:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.843 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.844 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.844 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.844 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.844 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.844 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.102 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.102 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.102 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.102 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.102 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.102 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.102 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.102 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.385 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.643 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.643 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.643 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.643 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.643 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.643 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.643 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.643 12:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.901 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:45.901 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.902 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:46.160 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:46.417 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:46.418 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:46.418 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:46.418 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:46.418 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.418 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:46.418 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:46.675 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:46.675 rmmod nvme_tcp 00:06:46.676 rmmod nvme_fabrics 00:06:46.676 rmmod nvme_keyring 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 910640 ']' 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 910640 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 910640 ']' 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 910640 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 910640 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 910640' 00:06:46.676 killing process with pid 910640 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 910640 00:06:46.676 12:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 910640 00:06:46.935 12:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:46.935 12:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 
-- # [[ tcp == \t\c\p ]] 00:06:46.935 12:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:46.935 12:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:46.935 12:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:46.935 12:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:46.935 12:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:46.935 12:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:46.935 12:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:46.935 12:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.935 12:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.936 12:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.473 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:49.473 00:06:49.473 real 0m47.826s 00:06:49.473 user 3m41.321s 00:06:49.473 sys 0m16.160s 00:06:49.473 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.473 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:49.473 ************************************ 00:06:49.473 END TEST nvmf_ns_hotplug_stress 00:06:49.473 ************************************ 00:06:49.473 12:28:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:49.473 12:28:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.473 12:28:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.473 12:28:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:49.473 ************************************ 00:06:49.473 START TEST nvmf_delete_subsystem 00:06:49.473 ************************************ 00:06:49.473 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:49.473 * Looking for test storage... 
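The teardown that closes the hot-plug test above is the usual nvmftestfini path from nvmf/common.sh: unload the kernel NVMe/TCP modules, kill the nvmf_tgt process by PID, strip the SPDK-tagged iptables rules, and clean up the target network namespace. A rough sketch of those steps as they appear in the trace (nvmfcleanup, iptr and remove_spdk_ns are the helpers traced above; 910640 is the PID captured in this particular run, and the namespace deletion is an assumption about what remove_spdk_ns does):

    # nvmftestfini, approximately
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 910640 && wait 910640                               # killprocess of the nvmf_tgt reactor
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                          # remove_spdk_ns (assumed)
    ip -4 addr flush cvl_0_1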
00:06:49.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.473 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.473 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.473 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.473 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.473 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.474 --rc genhtml_branch_coverage=1 00:06:49.474 --rc genhtml_function_coverage=1 00:06:49.474 --rc genhtml_legend=1 00:06:49.474 --rc geninfo_all_blocks=1 00:06:49.474 --rc geninfo_unexecuted_blocks=1 00:06:49.474 00:06:49.474 ' 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.474 --rc genhtml_branch_coverage=1 00:06:49.474 --rc genhtml_function_coverage=1 00:06:49.474 --rc genhtml_legend=1 00:06:49.474 --rc geninfo_all_blocks=1 00:06:49.474 --rc geninfo_unexecuted_blocks=1 00:06:49.474 00:06:49.474 ' 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.474 --rc genhtml_branch_coverage=1 00:06:49.474 --rc genhtml_function_coverage=1 00:06:49.474 --rc genhtml_legend=1 00:06:49.474 --rc geninfo_all_blocks=1 00:06:49.474 --rc geninfo_unexecuted_blocks=1 00:06:49.474 00:06:49.474 ' 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.474 --rc genhtml_branch_coverage=1 00:06:49.474 --rc genhtml_function_coverage=1 00:06:49.474 --rc genhtml_legend=1 00:06:49.474 --rc geninfo_all_blocks=1 00:06:49.474 --rc geninfo_unexecuted_blocks=1 00:06:49.474 00:06:49.474 ' 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:49.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:49.474 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:49.475 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.475 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:49.475 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:49.475 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:49.475 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.475 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.475 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.475 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:49.475 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:49.475 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:49.475 12:28:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:51.381 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.381 
12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:51.381 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:51.381 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:51.381 Found net devices under 0000:0a:00.1: cvl_0_1 
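With both E810 ports discovered (cvl_0_0 and cvl_0_1), nvmf_tcp_init in the entries that follow splits them into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables ACCEPT rule opens TCP port 4420, and a ping in each direction verifies the path. A sketch of that sequence, with command order and addresses taken from the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                         # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target namespace -> initiator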
00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.381 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:51.382 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:51.382 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.382 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:51.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:06:51.640 00:06:51.640 --- 10.0.0.2 ping statistics --- 00:06:51.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.640 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:51.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:06:51.640 00:06:51.640 --- 10.0.0.1 ping statistics --- 00:06:51.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.640 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=918043 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 918043 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 918043 ']' 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.640 12:28:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.640 12:28:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.640 [2024-11-15 12:28:31.934373] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:06:51.640 [2024-11-15 12:28:31.934455] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.899 [2024-11-15 12:28:32.006373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.899 [2024-11-15 12:28:32.063664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.899 [2024-11-15 12:28:32.063733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.899 [2024-11-15 12:28:32.063750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.899 [2024-11-15 12:28:32.063777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.899 [2024-11-15 12:28:32.063788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.899 [2024-11-15 12:28:32.065169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.899 [2024-11-15 12:28:32.065176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.899 [2024-11-15 12:28:32.207855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:51.899 12:28:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.899 [2024-11-15 12:28:32.224094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.899 NULL1 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.899 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.157 Delay0 00:06:52.157 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.157 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.157 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.157 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.157 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.157 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=918074 00:06:52.157 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:52.157 12:28:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:52.157 [2024-11-15 12:28:32.308896] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
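At this point the delete_subsystem case is fully set up: nvmf_tgt runs inside the target namespace on core mask 0x3, a TCP transport and subsystem cnode1 exist with a delay bdev (Delay0 layered on NULL1) as the namespace so that commands stay outstanding, a listener is up on 10.0.0.2:4420, and spdk_nvme_perf is driving queued I/O against it. The rpc_cmd sequence traced above is roughly the following sketch (arguments copied from the trace; the size/latency comments describe intent rather than exact units):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512        # backing null bdev, 512-byte blocks
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000  # large injected latency keeps I/O in flight
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    #                -t 5 -q 128 -w randrw -M 70 -o 512 -P 4   (started in the background)
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # issued while perf I/O is queued

The "completed with error (sct=0, sc=8)" completions that follow are the expected outcome of deleting the subsystem under load: tearing down the subsystem deletes its queues, and in-flight commands complete with generic status 0x08 (command aborted due to SQ deletion) instead of succeeding.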
00:06:54.056 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:54.056 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.056 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:54.313 [hundreds of repeated 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' completions from the in-flight spdk_nvme_perf workload while the subsystem is deleted, interleaved with the qpair state errors listed below]
00:06:54.314 [2024-11-15 12:28:34.519287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a94a0 is same with the state(6) to be set
00:06:54.314 [2024-11-15 12:28:34.520474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa36800d4b0 is same with the state(6) to be set
00:06:55.252 [2024-11-15 12:28:35.486606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16aa9a0 is same with the state(6) to be set
00:06:55.252 [2024-11-15 12:28:35.523513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa36800d7e0 is same with the state(6) to be set
00:06:55.252 [2024-11-15 12:28:35.523837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a92c0 is same with the state(6) to be set
00:06:55.252 [2024-11-15 12:28:35.524028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa36800d020 is same with the state(6) to be set
00:06:55.252 [2024-11-15 12:28:35.524183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a9680 is same with the state(6) to be set
00:06:55.252 Initializing NVMe Controllers 00:06:55.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:55.252 Controller IO queue size 128, less than required.
00:06:55.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:55.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:55.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:55.252 Initialization complete. Launching workers. 00:06:55.252 ======================================================== 00:06:55.252 Latency(us) 00:06:55.252 Device Information : IOPS MiB/s Average min max 00:06:55.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 160.38 0.08 925332.28 458.99 2004150.34 00:06:55.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.84 0.08 925051.72 345.09 2002987.51 00:06:55.252 ======================================================== 00:06:55.252 Total : 326.22 0.16 925189.65 345.09 2004150.34 00:06:55.252 00:06:55.252 [2024-11-15 12:28:35.525164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16aa9a0 (9): Bad file descriptor 00:06:55.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:55.252 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.252 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:55.252 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 918074 00:06:55.252 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 918074 00:06:55.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (918074) - No such process 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 918074 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 918074 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 918074 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 
0 )) 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.817 [2024-11-15 12:28:36.048106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=918483 00:06:55.817 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:55.818 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:55.818 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 918483 00:06:55.818 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:55.818 [2024-11-15 12:28:36.122382] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
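
For readers following the trace, the re-creation step above condenses to a short RPC/CLI sequence. The sketch below is not the literal test code: it calls scripts/rpc.py directly instead of the test's rpc_cmd wrapper, abbreviates paths, and assumes the Delay0 bdev already exists, as it does at this point in the run.

    # Sketch of the traced sequence (delete_subsystem.sh lines 48-54), using scripts/rpc.py
    # in place of rpc_cmd; the Delay0 bdev is assumed to exist already.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Drive I/O for 3 seconds at queue depth 128, 70% reads, 512-byte blocks, as in the trace:
    build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

The test then polls the perf process with kill -0 and sleep 0.5, which is exactly the loop visible in the trace that follows.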
00:06:56.383 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:56.383 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 918483 00:06:56.383 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:56.947 12:28:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:56.947 12:28:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 918483 00:06:56.947 12:28:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:57.512 12:28:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:57.512 12:28:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 918483 00:06:57.512 12:28:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:57.769 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:57.769 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 918483 00:06:57.769 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:58.334 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:58.334 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 918483 00:06:58.334 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:58.899 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:58.899 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 918483 00:06:58.899 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:58.899 Initializing NVMe Controllers 00:06:58.899 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:58.899 Controller IO queue size 128, less than required. 00:06:58.899 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:58.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:58.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:58.899 Initialization complete. Launching workers. 
00:06:58.899 ======================================================== 00:06:58.899 Latency(us) 00:06:58.899 Device Information : IOPS MiB/s Average min max 00:06:58.899 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004377.98 1000212.01 1013140.91 00:06:58.899 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004315.32 1000218.23 1012404.45 00:06:58.899 ======================================================== 00:06:58.899 Total : 256.00 0.12 1004346.65 1000212.01 1013140.91 00:06:58.899 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 918483 00:06:59.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (918483) - No such process 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 918483 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:59.464 rmmod nvme_tcp 00:06:59.464 rmmod nvme_fabrics 00:06:59.464 rmmod nvme_keyring 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 918043 ']' 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 918043 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 918043 ']' 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 918043 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 918043 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 918043' 00:06:59.464 killing process with pid 918043 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 918043 00:06:59.464 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 918043 00:06:59.721 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:59.721 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:59.722 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:59.722 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:59.722 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:59.722 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:59.722 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:59.722 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:59.722 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:59.722 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.722 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.722 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.628 12:28:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:01.628 00:07:01.628 real 0m12.673s 00:07:01.628 user 0m28.152s 00:07:01.628 sys 0m3.024s 00:07:01.628 12:28:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.628 12:28:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.628 ************************************ 00:07:01.628 END TEST nvmf_delete_subsystem 00:07:01.628 ************************************ 00:07:01.886 12:28:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:01.886 12:28:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.886 12:28:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.886 12:28:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:01.886 ************************************ 00:07:01.886 START TEST nvmf_host_management 00:07:01.886 ************************************ 00:07:01.886 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:01.886 * Looking for test storage... 
00:07:01.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.886 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.886 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.886 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.886 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.886 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.886 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.887 --rc genhtml_branch_coverage=1 00:07:01.887 --rc genhtml_function_coverage=1 00:07:01.887 --rc genhtml_legend=1 00:07:01.887 --rc geninfo_all_blocks=1 00:07:01.887 --rc geninfo_unexecuted_blocks=1 00:07:01.887 00:07:01.887 ' 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.887 --rc genhtml_branch_coverage=1 00:07:01.887 --rc genhtml_function_coverage=1 00:07:01.887 --rc genhtml_legend=1 00:07:01.887 --rc geninfo_all_blocks=1 00:07:01.887 --rc geninfo_unexecuted_blocks=1 00:07:01.887 00:07:01.887 ' 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.887 --rc genhtml_branch_coverage=1 00:07:01.887 --rc genhtml_function_coverage=1 00:07:01.887 --rc genhtml_legend=1 00:07:01.887 --rc geninfo_all_blocks=1 00:07:01.887 --rc geninfo_unexecuted_blocks=1 00:07:01.887 00:07:01.887 ' 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.887 --rc genhtml_branch_coverage=1 00:07:01.887 --rc genhtml_function_coverage=1 00:07:01.887 --rc genhtml_legend=1 00:07:01.887 --rc geninfo_all_blocks=1 00:07:01.887 --rc geninfo_unexecuted_blocks=1 00:07:01.887 00:07:01.887 ' 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:01.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.887 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:01.888 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:01.888 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:01.888 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:04.418 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:04.418 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:04.418 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:04.419 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.419 12:28:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:04.419 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:04.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:07:04.419 00:07:04.419 --- 10.0.0.2 ping statistics --- 00:07:04.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.419 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:04.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:07:04.419 00:07:04.419 --- 10.0.0.1 ping statistics --- 00:07:04.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.419 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=920956 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 920956 00:07:04.419 12:28:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 920956 ']' 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.419 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.419 [2024-11-15 12:28:44.538399] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:04.419 [2024-11-15 12:28:44.538491] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.419 [2024-11-15 12:28:44.608407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.419 [2024-11-15 12:28:44.666067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.419 [2024-11-15 12:28:44.666118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.419 [2024-11-15 12:28:44.666146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.419 [2024-11-15 12:28:44.666157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.419 [2024-11-15 12:28:44.666167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
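
The bring-up traced in this stretch amounts to launching nvmf_tgt inside the target network namespace, waiting for its RPC socket, and then creating the TCP transport. A minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket and abbreviated paths; the polling loop on rpc_get_methods stands in for the test's waitforlisten helper.

    # Rough equivalent of nvmfappstart -m 0x1E followed by the transport RPC seen below.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Wait until the target answers on its RPC socket before issuing configuration RPCs:
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # Create the TCP transport with the same options the test uses:
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192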
00:07:04.419 [2024-11-15 12:28:44.667638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.419 [2024-11-15 12:28:44.667713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.419 [2024-11-15 12:28:44.667769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:04.419 [2024-11-15 12:28:44.667772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.677 [2024-11-15 12:28:44.815587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.677 Malloc0 00:07:04.677 [2024-11-15 12:28:44.884560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=921005 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 921005 /var/tmp/bdevperf.sock 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 921005 ']' 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:04.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.677 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:04.677 { 00:07:04.677 "params": { 00:07:04.677 "name": "Nvme$subsystem", 00:07:04.677 "trtype": "$TEST_TRANSPORT", 00:07:04.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:04.677 "adrfam": "ipv4", 00:07:04.677 "trsvcid": "$NVMF_PORT", 00:07:04.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:04.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:04.677 "hdgst": ${hdgst:-false}, 00:07:04.677 "ddgst": ${ddgst:-false} 00:07:04.677 }, 00:07:04.678 "method": "bdev_nvme_attach_controller" 00:07:04.678 } 00:07:04.678 EOF 00:07:04.678 )") 00:07:04.678 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:04.678 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:04.678 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:04.678 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:04.678 "params": { 00:07:04.678 "name": "Nvme0", 00:07:04.678 "trtype": "tcp", 00:07:04.678 "traddr": "10.0.0.2", 00:07:04.678 "adrfam": "ipv4", 00:07:04.678 "trsvcid": "4420", 00:07:04.678 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:04.678 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:04.678 "hdgst": false, 00:07:04.678 "ddgst": false 00:07:04.678 }, 00:07:04.678 "method": "bdev_nvme_attach_controller" 00:07:04.678 }' 00:07:04.678 [2024-11-15 12:28:44.969599] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:07:04.678 [2024-11-15 12:28:44.969676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid921005 ] 00:07:04.935 [2024-11-15 12:28:45.038417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.935 [2024-11-15 12:28:45.097801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.193 Running I/O for 10 seconds... 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:05.193 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:05.451 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:05.451 
12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:05.451 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:05.451 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:05.451 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.451 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:05.451 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.451 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:05.451 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:05.451 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:05.451 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:05.451 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:05.451 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:05.451 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.451 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:05.710 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.710 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:05.710 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.710 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:05.710 [2024-11-15 12:28:45.801124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.710 [2024-11-15 12:28:45.801169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.801188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.710 [2024-11-15 12:28:45.801203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.801217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.710 [2024-11-15 12:28:45.801231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.801246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:07:05.710 [2024-11-15 12:28:45.801259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.801272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1420a40 is same with the state(6) to be set 00:07:05.710 [2024-11-15 12:28:45.801599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.801625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.801652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.801668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.801684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.801698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.801713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.801739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.801756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.801773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.801788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.801812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.801828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.801842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.801858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.801872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.801887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.801901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 
12:28:45.801916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.801930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.801945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.801959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.801974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.801988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.802003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.802017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.802032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.802045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.802060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.802074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.802088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.802102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.802117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.802130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.802147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.802161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.802180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.802194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 
12:28:45.802209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.802224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.802238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.802252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.802267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.802280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.802295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.802309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.802323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.710 [2024-11-15 12:28:45.802337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.710 [2024-11-15 12:28:45.802352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 
12:28:45.802495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 
12:28:45.802815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.802981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.802995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 
12:28:45.803113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 
12:28:45.803418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.711 [2024-11-15 12:28:45.803505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.711 [2024-11-15 12:28:45.803519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.712 [2024-11-15 12:28:45.803534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.712 [2024-11-15 12:28:45.803548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.712 [2024-11-15 12:28:45.804742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:05.712 task offset: 81920 on job bdev=Nvme0n1 fails 00:07:05.712 00:07:05.712 Latency(us) 00:07:05.712 [2024-11-15T11:28:46.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.712 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:05.712 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:05.712 Verification LBA range: start 0x0 length 0x400 00:07:05.712 Nvme0n1 : 0.40 1590.47 99.40 159.05 0.00 35525.81 2876.30 34369.99 00:07:05.712 [2024-11-15T11:28:46.056Z] =================================================================================================================== 00:07:05.712 [2024-11-15T11:28:46.056Z] Total : 1590.47 99.40 159.05 0.00 35525.81 2876.30 34369.99 00:07:05.712 [2024-11-15 12:28:45.806632] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.712 [2024-11-15 12:28:45.806674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1420a40 (9): Bad file descriptor 00:07:05.712 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.712 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:05.712 [2024-11-15 12:28:45.867877] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
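The waitforio gate traced at host_management.sh lines 52 to 64 above polls bdevperf over its RPC socket until the Nvme0n1 bdev reports at least 100 completed reads (67 on the first pass, 579 on the second). A rough standalone equivalent of that loop, calling scripts/rpc.py directly instead of the suite's rpc_cmd wrapper but reusing the socket path, bdev name, jq filter, retry count and 0.25s back-off visible in the trace; this is a sketch, not the test's exact code:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
ret=1
for (( i = 10; i != 0; i-- )); do
    # Ask bdevperf for per-bdev I/O statistics and pull out the read-op counter.
    reads=$("$rpc" -s "$sock" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 0.25
done
exit $ret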
00:07:06.749 12:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 921005 00:07:06.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (921005) - No such process 00:07:06.749 12:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:06.749 12:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:06.749 12:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:06.749 12:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:06.749 12:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:06.749 12:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:06.749 12:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:06.749 12:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:06.749 { 00:07:06.749 "params": { 00:07:06.749 "name": "Nvme$subsystem", 00:07:06.749 "trtype": "$TEST_TRANSPORT", 00:07:06.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:06.749 "adrfam": "ipv4", 00:07:06.749 "trsvcid": "$NVMF_PORT", 00:07:06.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:06.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:06.749 "hdgst": ${hdgst:-false}, 00:07:06.749 "ddgst": ${ddgst:-false} 00:07:06.749 }, 00:07:06.749 "method": "bdev_nvme_attach_controller" 00:07:06.749 } 00:07:06.749 EOF 00:07:06.749 )") 00:07:06.749 12:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:06.749 12:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:06.749 12:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:06.749 12:28:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:06.749 "params": { 00:07:06.749 "name": "Nvme0", 00:07:06.749 "trtype": "tcp", 00:07:06.749 "traddr": "10.0.0.2", 00:07:06.749 "adrfam": "ipv4", 00:07:06.749 "trsvcid": "4420", 00:07:06.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:06.749 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:06.749 "hdgst": false, 00:07:06.749 "ddgst": false 00:07:06.749 }, 00:07:06.749 "method": "bdev_nvme_attach_controller" 00:07:06.749 }' 00:07:06.749 [2024-11-15 12:28:46.861927] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:07:06.749 [2024-11-15 12:28:46.862026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid921283 ] 00:07:06.749 [2024-11-15 12:28:46.931540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.749 [2024-11-15 12:28:46.991642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.049 Running I/O for 1 seconds... 
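Both bdevperf runs in this trace receive their controller definition from gen_nvmf_target_json, whose heredoc template and rendered output are both shown above: the $subsystem placeholder plus the env-provided transport, address and port expand to the Nvme0 / tcp / 10.0.0.2 / 4420 block. A condensed sketch of that expansion step alone, with the values hard-coded from the rendered JSON in this log (the real helper assembles the complete --json document around this stanza, which is not reproduced here):

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=0
# Expand the per-subsystem attach-controller stanza the same way the template above does.
cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF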
00:07:08.424 1664.00 IOPS, 104.00 MiB/s 00:07:08.424 Latency(us) 00:07:08.424 [2024-11-15T11:28:48.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.424 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:08.424 Verification LBA range: start 0x0 length 0x400 00:07:08.424 Nvme0n1 : 1.02 1701.69 106.36 0.00 0.00 36995.26 5267.15 33399.09 00:07:08.424 [2024-11-15T11:28:48.768Z] =================================================================================================================== 00:07:08.424 [2024-11-15T11:28:48.768Z] Total : 1701.69 106.36 0.00 0.00 36995.26 5267.15 33399.09 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:08.424 rmmod nvme_tcp 00:07:08.424 rmmod nvme_fabrics 00:07:08.424 rmmod nvme_keyring 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 920956 ']' 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 920956 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 920956 ']' 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 920956 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 920956 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:08.424 12:28:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 920956' 00:07:08.424 killing process with pid 920956 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 920956 00:07:08.424 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 920956 00:07:08.684 [2024-11-15 12:28:48.920454] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:08.684 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:08.684 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:08.684 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:08.684 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:08.684 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:08.684 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:08.684 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:08.684 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:08.684 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:08.684 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.684 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:08.684 12:28:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.221 12:28:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:11.221 12:28:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:11.221 00:07:11.221 real 0m8.993s 00:07:11.221 user 0m20.353s 00:07:11.221 sys 0m2.804s 00:07:11.221 12:28:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.221 12:28:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.221 ************************************ 00:07:11.221 END TEST nvmf_host_management 00:07:11.221 ************************************ 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:11.221 ************************************ 00:07:11.221 START TEST nvmf_lvol 00:07:11.221 ************************************ 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:11.221 * Looking for test storage... 00:07:11.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.221 --rc genhtml_branch_coverage=1 00:07:11.221 --rc genhtml_function_coverage=1 00:07:11.221 --rc genhtml_legend=1 00:07:11.221 --rc geninfo_all_blocks=1 00:07:11.221 --rc geninfo_unexecuted_blocks=1 00:07:11.221 00:07:11.221 ' 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.221 --rc genhtml_branch_coverage=1 00:07:11.221 --rc genhtml_function_coverage=1 00:07:11.221 --rc genhtml_legend=1 00:07:11.221 --rc geninfo_all_blocks=1 00:07:11.221 --rc geninfo_unexecuted_blocks=1 00:07:11.221 00:07:11.221 ' 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.221 --rc genhtml_branch_coverage=1 00:07:11.221 --rc genhtml_function_coverage=1 00:07:11.221 --rc genhtml_legend=1 00:07:11.221 --rc geninfo_all_blocks=1 00:07:11.221 --rc geninfo_unexecuted_blocks=1 00:07:11.221 00:07:11.221 ' 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.221 --rc genhtml_branch_coverage=1 00:07:11.221 --rc genhtml_function_coverage=1 00:07:11.221 --rc genhtml_legend=1 00:07:11.221 --rc geninfo_all_blocks=1 00:07:11.221 --rc geninfo_unexecuted_blocks=1 00:07:11.221 00:07:11.221 ' 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
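The nvmf_lvol prologue above walks through scripts/common.sh comparing the installed lcov version ("1.15") against "2" component by component after splitting on the separators. A condensed sketch of that comparison for the less-than case only (the real cmp_versions also splits on '-' and ':' and handles the other operators):

version_lt() {
    # Return success when dotted version $1 sorts before $2, comparing numerically per component.
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}
version_lt 1.15 2 && echo "lcov 1.15 < 2"   # the branch this log takes before setting lcov_rc_opt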
00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.221 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:11.222 12:28:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:13.129 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:13.129 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.129 12:28:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:13.129 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:13.129 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.129 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:13.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:07:13.130 00:07:13.130 --- 10.0.0.2 ping statistics --- 00:07:13.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.130 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:13.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:07:13.130 00:07:13.130 --- 10.0.0.1 ping statistics --- 00:07:13.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.130 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:13.130 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:13.388 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:13.388 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:13.388 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:13.388 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:13.388 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=923505 00:07:13.388 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:13.388 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 923505 00:07:13.388 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 923505 ']' 00:07:13.388 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.388 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.388 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.388 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.388 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:13.388 [2024-11-15 12:28:53.551068] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
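The nvmf_tcp_init trace above builds the test topology out of the two E810 ports: cvl_0_0 becomes the target interface inside a dedicated network namespace, cvl_0_1 stays in the default namespace as the initiator, an iptables rule opens TCP port 4420, and both directions are verified with ping. Collected into one place, and assuming the same interface names and addresses as this run, the equivalent commands are roughly:

# Target side lives in its own netns so initiator and target can share one host.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host

The target application is then launched inside the namespace (paths shortened here), which is why the listeners created later bind to 10.0.0.2:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7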
00:07:13.388 [2024-11-15 12:28:53.551159] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.388 [2024-11-15 12:28:53.623475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.388 [2024-11-15 12:28:53.683604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.388 [2024-11-15 12:28:53.683657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.388 [2024-11-15 12:28:53.683685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.388 [2024-11-15 12:28:53.683697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.388 [2024-11-15 12:28:53.683707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:13.388 [2024-11-15 12:28:53.685267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.388 [2024-11-15 12:28:53.685333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.388 [2024-11-15 12:28:53.685337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.646 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.646 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:13.646 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:13.646 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:13.646 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:13.646 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.646 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:13.902 [2024-11-15 12:28:54.062874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.902 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:14.160 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:14.160 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:14.418 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:14.418 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:14.675 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:14.933 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d70a4065-782f-4d12-a7b0-593015c943aa 00:07:14.933 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d70a4065-782f-4d12-a7b0-593015c943aa lvol 20 00:07:15.191 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3a02b60e-d677-4ae7-a403-6e35244c4b8c 00:07:15.191 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:15.756 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3a02b60e-d677-4ae7-a403-6e35244c4b8c 00:07:15.756 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:16.014 [2024-11-15 12:28:56.297161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.015 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:16.272 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=923930 00:07:16.272 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:16.272 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:17.646 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3a02b60e-d677-4ae7-a403-6e35244c4b8c MY_SNAPSHOT 00:07:17.646 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3c1a1805-63ac-4b9e-9525-2a93b6d78872 00:07:17.646 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3a02b60e-d677-4ae7-a403-6e35244c4b8c 30 00:07:17.904 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3c1a1805-63ac-4b9e-9525-2a93b6d78872 MY_CLONE 00:07:18.470 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=53107b98-eb09-4358-be3f-b0512c7157b7 00:07:18.470 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 53107b98-eb09-4358-be3f-b0512c7157b7 00:07:19.037 12:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 923930 00:07:27.146 Initializing NVMe Controllers 00:07:27.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:27.146 Controller IO queue size 128, less than required. 00:07:27.146 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:27.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:27.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:27.146 Initialization complete. Launching workers. 00:07:27.146 ======================================================== 00:07:27.146 Latency(us) 00:07:27.146 Device Information : IOPS MiB/s Average min max 00:07:27.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10514.80 41.07 12176.29 2037.96 121478.05 00:07:27.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10254.10 40.06 12493.31 2176.22 53397.04 00:07:27.146 ======================================================== 00:07:27.146 Total : 20768.90 81.13 12332.81 2037.96 121478.05 00:07:27.146 00:07:27.146 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:27.146 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3a02b60e-d677-4ae7-a403-6e35244c4b8c 00:07:27.146 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d70a4065-782f-4d12-a7b0-593015c943aa 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:27.712 rmmod nvme_tcp 00:07:27.712 rmmod nvme_fabrics 00:07:27.712 rmmod nvme_keyring 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 923505 ']' 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 923505 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 923505 ']' 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 923505 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 923505 00:07:27.712 12:29:07 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 923505' 00:07:27.712 killing process with pid 923505 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 923505 00:07:27.712 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 923505 00:07:27.970 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:27.970 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:27.970 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:27.970 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:27.970 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:27.970 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:27.970 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:27.970 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:27.970 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:27.970 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.970 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.970 12:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.879 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:29.879 00:07:29.879 real 0m19.141s 00:07:29.879 user 1m5.388s 00:07:29.879 sys 0m5.495s 00:07:29.879 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.880 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.880 ************************************ 00:07:29.880 END TEST nvmf_lvol 00:07:29.880 ************************************ 00:07:29.880 12:29:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:29.880 12:29:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.880 12:29:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.880 12:29:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.139 ************************************ 00:07:30.139 START TEST nvmf_lvs_grow 00:07:30.139 ************************************ 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:30.139 * Looking for test storage... 
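The nvmf_lvol run that finishes above (END TEST nvmf_lvol) boils down to a fixed sequence of rpc.py calls against the target started earlier, with spdk_nvme_perf driving I/O from the initiator while the snapshot, resize, clone, and inflate operations are issued. A condensed sketch of that flow, with shortened paths and shell variables standing in for the UUIDs the real run captured (the lvstore d70a4065-..., the lvol 3a02b60e-...):

rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                      # -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # lvstore on top of the raid
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB volume

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 10 s of 4 KiB random writes from the initiator while the lvol is reshaped underneath.
./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!

snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait "$perf_pid"

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"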
00:07:30.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.139 --rc genhtml_branch_coverage=1 00:07:30.139 --rc genhtml_function_coverage=1 00:07:30.139 --rc genhtml_legend=1 00:07:30.139 --rc geninfo_all_blocks=1 00:07:30.139 --rc geninfo_unexecuted_blocks=1 00:07:30.139 00:07:30.139 ' 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.139 --rc genhtml_branch_coverage=1 00:07:30.139 --rc genhtml_function_coverage=1 00:07:30.139 --rc genhtml_legend=1 00:07:30.139 --rc geninfo_all_blocks=1 00:07:30.139 --rc geninfo_unexecuted_blocks=1 00:07:30.139 00:07:30.139 ' 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.139 --rc genhtml_branch_coverage=1 00:07:30.139 --rc genhtml_function_coverage=1 00:07:30.139 --rc genhtml_legend=1 00:07:30.139 --rc geninfo_all_blocks=1 00:07:30.139 --rc geninfo_unexecuted_blocks=1 00:07:30.139 00:07:30.139 ' 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.139 --rc genhtml_branch_coverage=1 00:07:30.139 --rc genhtml_function_coverage=1 00:07:30.139 --rc genhtml_legend=1 00:07:30.139 --rc geninfo_all_blocks=1 00:07:30.139 --rc geninfo_unexecuted_blocks=1 00:07:30.139 00:07:30.139 ' 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:30.139 12:29:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.139 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:30.140 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:32.668 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:32.668 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:32.668 12:29:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:32.668 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:32.668 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:32.668 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:32.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:07:32.669 00:07:32.669 --- 10.0.0.2 ping statistics --- 00:07:32.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.669 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:32.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:07:32.669 00:07:32.669 --- 10.0.0.1 ping statistics --- 00:07:32.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.669 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=927217 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 927217 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 927217 ']' 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.669 [2024-11-15 12:29:12.759692] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:07:32.669 [2024-11-15 12:29:12.759844] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.669 [2024-11-15 12:29:12.830195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.669 [2024-11-15 12:29:12.882847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.669 [2024-11-15 12:29:12.882907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.669 [2024-11-15 12:29:12.882936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.669 [2024-11-15 12:29:12.882947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.669 [2024-11-15 12:29:12.882957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.669 [2024-11-15 12:29:12.883528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:32.669 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.927 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.927 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:32.927 [2024-11-15 12:29:13.268056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.184 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:33.184 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.184 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.184 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.184 ************************************ 00:07:33.184 START TEST lvs_grow_clean 00:07:33.184 ************************************ 00:07:33.184 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:33.184 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:33.184 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:33.184 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:33.184 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:33.184 12:29:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:33.185 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:33.185 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:33.185 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:33.185 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:33.442 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:33.442 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:33.699 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=dd5eb009-371e-48d4-a284-86b6e33103ca 00:07:33.699 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5eb009-371e-48d4-a284-86b6e33103ca 00:07:33.699 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:33.956 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:33.956 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:33.956 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dd5eb009-371e-48d4-a284-86b6e33103ca lvol 150 00:07:34.214 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e8ca2ee7-b92a-44dc-9ce3-7aeb6b8b6474 00:07:34.214 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:34.214 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:34.472 [2024-11-15 12:29:14.698129] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:34.472 [2024-11-15 12:29:14.698229] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:34.472 true 00:07:34.472 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
dd5eb009-371e-48d4-a284-86b6e33103ca 00:07:34.472 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:34.731 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:34.731 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:34.989 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e8ca2ee7-b92a-44dc-9ce3-7aeb6b8b6474 00:07:35.247 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:35.505 [2024-11-15 12:29:15.825526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.505 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.070 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=927657 00:07:36.070 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:36.070 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:36.070 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 927657 /var/tmp/bdevperf.sock 00:07:36.070 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 927657 ']' 00:07:36.070 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:36.070 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.070 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:36.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:36.070 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.070 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:36.070 [2024-11-15 12:29:16.152381] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:07:36.070 [2024-11-15 12:29:16.152451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid927657 ] 00:07:36.070 [2024-11-15 12:29:16.216192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.070 [2024-11-15 12:29:16.272331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.070 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.070 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:36.070 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:36.635 Nvme0n1 00:07:36.635 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:36.892 [ 00:07:36.892 { 00:07:36.892 "name": "Nvme0n1", 00:07:36.892 "aliases": [ 00:07:36.892 "e8ca2ee7-b92a-44dc-9ce3-7aeb6b8b6474" 00:07:36.892 ], 00:07:36.892 "product_name": "NVMe disk", 00:07:36.892 "block_size": 4096, 00:07:36.892 "num_blocks": 38912, 00:07:36.892 "uuid": "e8ca2ee7-b92a-44dc-9ce3-7aeb6b8b6474", 00:07:36.892 "numa_id": 0, 00:07:36.892 "assigned_rate_limits": { 00:07:36.892 "rw_ios_per_sec": 0, 00:07:36.892 "rw_mbytes_per_sec": 0, 00:07:36.892 "r_mbytes_per_sec": 0, 00:07:36.892 "w_mbytes_per_sec": 0 00:07:36.892 }, 00:07:36.892 "claimed": false, 00:07:36.892 "zoned": false, 00:07:36.892 "supported_io_types": { 00:07:36.892 "read": true, 00:07:36.892 "write": true, 00:07:36.892 "unmap": true, 00:07:36.892 "flush": true, 00:07:36.892 "reset": true, 00:07:36.892 "nvme_admin": true, 00:07:36.892 "nvme_io": true, 00:07:36.892 "nvme_io_md": false, 00:07:36.892 "write_zeroes": true, 00:07:36.892 "zcopy": false, 00:07:36.892 "get_zone_info": false, 00:07:36.892 "zone_management": false, 00:07:36.892 "zone_append": false, 00:07:36.892 "compare": true, 00:07:36.892 "compare_and_write": true, 00:07:36.892 "abort": true, 00:07:36.892 "seek_hole": false, 00:07:36.892 "seek_data": false, 00:07:36.892 "copy": true, 00:07:36.892 "nvme_iov_md": false 00:07:36.892 }, 00:07:36.892 "memory_domains": [ 00:07:36.892 { 00:07:36.892 "dma_device_id": "system", 00:07:36.892 "dma_device_type": 1 00:07:36.892 } 00:07:36.892 ], 00:07:36.892 "driver_specific": { 00:07:36.892 "nvme": [ 00:07:36.892 { 00:07:36.892 "trid": { 00:07:36.892 "trtype": "TCP", 00:07:36.892 "adrfam": "IPv4", 00:07:36.892 "traddr": "10.0.0.2", 00:07:36.892 "trsvcid": "4420", 00:07:36.892 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:36.892 }, 00:07:36.892 "ctrlr_data": { 00:07:36.892 "cntlid": 1, 00:07:36.892 "vendor_id": "0x8086", 00:07:36.892 "model_number": "SPDK bdev Controller", 00:07:36.892 "serial_number": "SPDK0", 00:07:36.892 "firmware_revision": "25.01", 00:07:36.892 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:36.892 "oacs": { 00:07:36.892 "security": 0, 00:07:36.892 "format": 0, 00:07:36.892 "firmware": 0, 00:07:36.892 "ns_manage": 0 00:07:36.892 }, 00:07:36.892 "multi_ctrlr": true, 00:07:36.892 
"ana_reporting": false 00:07:36.892 }, 00:07:36.892 "vs": { 00:07:36.892 "nvme_version": "1.3" 00:07:36.892 }, 00:07:36.892 "ns_data": { 00:07:36.892 "id": 1, 00:07:36.892 "can_share": true 00:07:36.892 } 00:07:36.892 } 00:07:36.892 ], 00:07:36.892 "mp_policy": "active_passive" 00:07:36.892 } 00:07:36.892 } 00:07:36.892 ] 00:07:36.892 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=927793 00:07:36.892 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:36.892 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:37.149 Running I/O for 10 seconds... 00:07:38.081 Latency(us) 00:07:38.081 [2024-11-15T11:29:18.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.081 Nvme0n1 : 1.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:07:38.081 [2024-11-15T11:29:18.425Z] =================================================================================================================== 00:07:38.081 [2024-11-15T11:29:18.425Z] Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:07:38.081 00:07:39.013 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dd5eb009-371e-48d4-a284-86b6e33103ca 00:07:39.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.013 Nvme0n1 : 2.00 15018.50 58.67 0.00 0.00 0.00 0.00 0.00 00:07:39.013 [2024-11-15T11:29:19.357Z] =================================================================================================================== 00:07:39.013 [2024-11-15T11:29:19.357Z] Total : 15018.50 58.67 0.00 0.00 0.00 0.00 0.00 00:07:39.013 00:07:39.271 true 00:07:39.271 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5eb009-371e-48d4-a284-86b6e33103ca 00:07:39.271 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:39.528 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:39.528 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:39.528 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 927793 00:07:40.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.095 Nvme0n1 : 3.00 15113.33 59.04 0.00 0.00 0.00 0.00 0.00 00:07:40.095 [2024-11-15T11:29:20.439Z] =================================================================================================================== 00:07:40.095 [2024-11-15T11:29:20.439Z] Total : 15113.33 59.04 0.00 0.00 0.00 0.00 0.00 00:07:40.095 00:07:41.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.029 Nvme0n1 : 4.00 15208.50 59.41 0.00 0.00 0.00 0.00 0.00 00:07:41.029 [2024-11-15T11:29:21.373Z] 
=================================================================================================================== 00:07:41.029 [2024-11-15T11:29:21.373Z] Total : 15208.50 59.41 0.00 0.00 0.00 0.00 0.00 00:07:41.029 00:07:41.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.964 Nvme0n1 : 5.00 15291.00 59.73 0.00 0.00 0.00 0.00 0.00 00:07:41.964 [2024-11-15T11:29:22.308Z] =================================================================================================================== 00:07:41.964 [2024-11-15T11:29:22.308Z] Total : 15291.00 59.73 0.00 0.00 0.00 0.00 0.00 00:07:41.964 00:07:43.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.339 Nvme0n1 : 6.00 15346.00 59.95 0.00 0.00 0.00 0.00 0.00 00:07:43.339 [2024-11-15T11:29:23.683Z] =================================================================================================================== 00:07:43.339 [2024-11-15T11:29:23.683Z] Total : 15346.00 59.95 0.00 0.00 0.00 0.00 0.00 00:07:43.339 00:07:44.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.274 Nvme0n1 : 7.00 15403.86 60.17 0.00 0.00 0.00 0.00 0.00 00:07:44.274 [2024-11-15T11:29:24.618Z] =================================================================================================================== 00:07:44.274 [2024-11-15T11:29:24.618Z] Total : 15403.86 60.17 0.00 0.00 0.00 0.00 0.00 00:07:44.274 00:07:45.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.208 Nvme0n1 : 8.00 15439.38 60.31 0.00 0.00 0.00 0.00 0.00 00:07:45.208 [2024-11-15T11:29:25.552Z] =================================================================================================================== 00:07:45.208 [2024-11-15T11:29:25.552Z] Total : 15439.38 60.31 0.00 0.00 0.00 0.00 0.00 00:07:45.208 00:07:46.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.141 Nvme0n1 : 9.00 15466.56 60.42 0.00 0.00 0.00 0.00 0.00 00:07:46.141 [2024-11-15T11:29:26.485Z] =================================================================================================================== 00:07:46.141 [2024-11-15T11:29:26.485Z] Total : 15466.56 60.42 0.00 0.00 0.00 0.00 0.00 00:07:46.141 00:07:47.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.075 Nvme0n1 : 10.00 15494.70 60.53 0.00 0.00 0.00 0.00 0.00 00:07:47.075 [2024-11-15T11:29:27.419Z] =================================================================================================================== 00:07:47.075 [2024-11-15T11:29:27.419Z] Total : 15494.70 60.53 0.00 0.00 0.00 0.00 0.00 00:07:47.075 00:07:47.075 00:07:47.075 Latency(us) 00:07:47.075 [2024-11-15T11:29:27.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.075 Nvme0n1 : 10.01 15498.52 60.54 0.00 0.00 8254.31 5072.97 17961.72 00:07:47.075 [2024-11-15T11:29:27.419Z] =================================================================================================================== 00:07:47.075 [2024-11-15T11:29:27.419Z] Total : 15498.52 60.54 0.00 0.00 8254.31 5072.97 17961.72 00:07:47.075 { 00:07:47.075 "results": [ 00:07:47.075 { 00:07:47.075 "job": "Nvme0n1", 00:07:47.075 "core_mask": "0x2", 00:07:47.075 "workload": "randwrite", 00:07:47.075 "status": "finished", 00:07:47.075 "queue_depth": 128, 00:07:47.075 "io_size": 4096, 00:07:47.075 
"runtime": 10.005791, 00:07:47.075 "iops": 15498.524804285838, 00:07:47.075 "mibps": 60.541112516741556, 00:07:47.075 "io_failed": 0, 00:07:47.075 "io_timeout": 0, 00:07:47.075 "avg_latency_us": 8254.314784745731, 00:07:47.075 "min_latency_us": 5072.971851851852, 00:07:47.075 "max_latency_us": 17961.71851851852 00:07:47.075 } 00:07:47.075 ], 00:07:47.075 "core_count": 1 00:07:47.075 } 00:07:47.075 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 927657 00:07:47.075 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 927657 ']' 00:07:47.075 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 927657 00:07:47.075 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:47.075 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.075 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 927657 00:07:47.075 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:47.075 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:47.075 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 927657' 00:07:47.075 killing process with pid 927657 00:07:47.075 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 927657 00:07:47.075 Received shutdown signal, test time was about 10.000000 seconds 00:07:47.075 00:07:47.075 Latency(us) 00:07:47.075 [2024-11-15T11:29:27.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.075 [2024-11-15T11:29:27.419Z] =================================================================================================================== 00:07:47.075 [2024-11-15T11:29:27.419Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:47.075 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 927657 00:07:47.333 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:47.591 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:47.849 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5eb009-371e-48d4-a284-86b6e33103ca 00:07:47.849 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:48.107 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:48.107 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:48.107 12:29:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:48.366 [2024-11-15 12:29:28.628603] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:48.366 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5eb009-371e-48d4-a284-86b6e33103ca 00:07:48.366 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:48.366 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5eb009-371e-48d4-a284-86b6e33103ca 00:07:48.366 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:48.366 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.366 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:48.366 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.366 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:48.366 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.366 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:48.366 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:48.366 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5eb009-371e-48d4-a284-86b6e33103ca 00:07:48.624 request: 00:07:48.624 { 00:07:48.624 "uuid": "dd5eb009-371e-48d4-a284-86b6e33103ca", 00:07:48.624 "method": "bdev_lvol_get_lvstores", 00:07:48.624 "req_id": 1 00:07:48.624 } 00:07:48.624 Got JSON-RPC error response 00:07:48.624 response: 00:07:48.624 { 00:07:48.624 "code": -19, 00:07:48.624 "message": "No such device" 00:07:48.624 } 00:07:48.624 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:48.624 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:48.624 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:48.624 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:48.624 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:48.882 aio_bdev 00:07:48.882 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e8ca2ee7-b92a-44dc-9ce3-7aeb6b8b6474 00:07:48.882 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e8ca2ee7-b92a-44dc-9ce3-7aeb6b8b6474 00:07:48.882 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:48.882 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:48.882 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:48.882 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:48.882 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:49.141 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e8ca2ee7-b92a-44dc-9ce3-7aeb6b8b6474 -t 2000 00:07:49.399 [ 00:07:49.399 { 00:07:49.399 "name": "e8ca2ee7-b92a-44dc-9ce3-7aeb6b8b6474", 00:07:49.399 "aliases": [ 00:07:49.399 "lvs/lvol" 00:07:49.399 ], 00:07:49.399 "product_name": "Logical Volume", 00:07:49.399 "block_size": 4096, 00:07:49.399 "num_blocks": 38912, 00:07:49.399 "uuid": "e8ca2ee7-b92a-44dc-9ce3-7aeb6b8b6474", 00:07:49.399 "assigned_rate_limits": { 00:07:49.399 "rw_ios_per_sec": 0, 00:07:49.399 "rw_mbytes_per_sec": 0, 00:07:49.399 "r_mbytes_per_sec": 0, 00:07:49.399 "w_mbytes_per_sec": 0 00:07:49.399 }, 00:07:49.399 "claimed": false, 00:07:49.399 "zoned": false, 00:07:49.399 "supported_io_types": { 00:07:49.399 "read": true, 00:07:49.399 "write": true, 00:07:49.399 "unmap": true, 00:07:49.399 "flush": false, 00:07:49.399 "reset": true, 00:07:49.399 "nvme_admin": false, 00:07:49.399 "nvme_io": false, 00:07:49.399 "nvme_io_md": false, 00:07:49.399 "write_zeroes": true, 00:07:49.399 "zcopy": false, 00:07:49.399 "get_zone_info": false, 00:07:49.399 "zone_management": false, 00:07:49.399 "zone_append": false, 00:07:49.399 "compare": false, 00:07:49.399 "compare_and_write": false, 00:07:49.399 "abort": false, 00:07:49.399 "seek_hole": true, 00:07:49.399 "seek_data": true, 00:07:49.399 "copy": false, 00:07:49.399 "nvme_iov_md": false 00:07:49.399 }, 00:07:49.399 "driver_specific": { 00:07:49.399 "lvol": { 00:07:49.399 "lvol_store_uuid": "dd5eb009-371e-48d4-a284-86b6e33103ca", 00:07:49.399 "base_bdev": "aio_bdev", 00:07:49.399 "thin_provision": false, 00:07:49.399 "num_allocated_clusters": 38, 00:07:49.399 "snapshot": false, 00:07:49.399 "clone": false, 00:07:49.399 "esnap_clone": false 00:07:49.399 } 00:07:49.399 } 00:07:49.399 } 00:07:49.399 ] 00:07:49.657 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:49.657 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5eb009-371e-48d4-a284-86b6e33103ca 00:07:49.657 
12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:49.915 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:49.915 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5eb009-371e-48d4-a284-86b6e33103ca 00:07:49.915 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:50.173 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:50.173 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e8ca2ee7-b92a-44dc-9ce3-7aeb6b8b6474 00:07:50.435 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dd5eb009-371e-48d4-a284-86b6e33103ca 00:07:50.693 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.951 00:07:50.951 real 0m17.845s 00:07:50.951 user 0m17.497s 00:07:50.951 sys 0m1.723s 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:50.951 ************************************ 00:07:50.951 END TEST lvs_grow_clean 00:07:50.951 ************************************ 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:50.951 ************************************ 00:07:50.951 START TEST lvs_grow_dirty 00:07:50.951 ************************************ 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.951 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:51.210 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:51.210 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:51.468 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=57e13417-21c5-4706-99a7-3ed35cfab3be 00:07:51.468 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57e13417-21c5-4706-99a7-3ed35cfab3be 00:07:51.468 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:51.726 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:51.726 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:51.726 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 57e13417-21c5-4706-99a7-3ed35cfab3be lvol 150 00:07:51.984 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ac4567e3-9733-4a3b-94db-31534a5c3c7c 00:07:51.984 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:51.984 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:52.243 [2024-11-15 12:29:32.566052] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:52.243 [2024-11-15 12:29:32.566140] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:52.243 true 00:07:52.501 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57e13417-21c5-4706-99a7-3ed35cfab3be 00:07:52.501 12:29:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:52.759 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:52.759 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:53.017 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ac4567e3-9733-4a3b-94db-31534a5c3c7c 00:07:53.275 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:53.533 [2024-11-15 12:29:33.657339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.533 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.791 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=929843 00:07:53.792 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:53.792 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.792 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 929843 /var/tmp/bdevperf.sock 00:07:53.792 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 929843 ']' 00:07:53.792 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:53.792 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.792 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:53.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:53.792 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.792 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:53.792 [2024-11-15 12:29:33.982769] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:07:53.792 [2024-11-15 12:29:33.982853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929843 ] 00:07:53.792 [2024-11-15 12:29:34.047668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.792 [2024-11-15 12:29:34.103622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.050 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.050 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:54.050 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:54.614 Nvme0n1 00:07:54.614 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:54.614 [ 00:07:54.614 { 00:07:54.614 "name": "Nvme0n1", 00:07:54.614 "aliases": [ 00:07:54.614 "ac4567e3-9733-4a3b-94db-31534a5c3c7c" 00:07:54.614 ], 00:07:54.614 "product_name": "NVMe disk", 00:07:54.614 "block_size": 4096, 00:07:54.614 "num_blocks": 38912, 00:07:54.614 "uuid": "ac4567e3-9733-4a3b-94db-31534a5c3c7c", 00:07:54.614 "numa_id": 0, 00:07:54.614 "assigned_rate_limits": { 00:07:54.614 "rw_ios_per_sec": 0, 00:07:54.614 "rw_mbytes_per_sec": 0, 00:07:54.614 "r_mbytes_per_sec": 0, 00:07:54.614 "w_mbytes_per_sec": 0 00:07:54.614 }, 00:07:54.614 "claimed": false, 00:07:54.614 "zoned": false, 00:07:54.614 "supported_io_types": { 00:07:54.614 "read": true, 00:07:54.614 "write": true, 00:07:54.614 "unmap": true, 00:07:54.614 "flush": true, 00:07:54.614 "reset": true, 00:07:54.614 "nvme_admin": true, 00:07:54.614 "nvme_io": true, 00:07:54.614 "nvme_io_md": false, 00:07:54.614 "write_zeroes": true, 00:07:54.614 "zcopy": false, 00:07:54.614 "get_zone_info": false, 00:07:54.614 "zone_management": false, 00:07:54.614 "zone_append": false, 00:07:54.614 "compare": true, 00:07:54.614 "compare_and_write": true, 00:07:54.614 "abort": true, 00:07:54.614 "seek_hole": false, 00:07:54.614 "seek_data": false, 00:07:54.614 "copy": true, 00:07:54.614 "nvme_iov_md": false 00:07:54.614 }, 00:07:54.614 "memory_domains": [ 00:07:54.614 { 00:07:54.614 "dma_device_id": "system", 00:07:54.614 "dma_device_type": 1 00:07:54.614 } 00:07:54.614 ], 00:07:54.614 "driver_specific": { 00:07:54.614 "nvme": [ 00:07:54.614 { 00:07:54.614 "trid": { 00:07:54.614 "trtype": "TCP", 00:07:54.614 "adrfam": "IPv4", 00:07:54.614 "traddr": "10.0.0.2", 00:07:54.614 "trsvcid": "4420", 00:07:54.614 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:54.614 }, 00:07:54.614 "ctrlr_data": { 00:07:54.614 "cntlid": 1, 00:07:54.614 "vendor_id": "0x8086", 00:07:54.614 "model_number": "SPDK bdev Controller", 00:07:54.614 "serial_number": "SPDK0", 00:07:54.614 "firmware_revision": "25.01", 00:07:54.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:54.614 "oacs": { 00:07:54.614 "security": 0, 00:07:54.614 "format": 0, 00:07:54.614 "firmware": 0, 00:07:54.614 "ns_manage": 0 00:07:54.614 }, 00:07:54.614 "multi_ctrlr": true, 00:07:54.614 
"ana_reporting": false 00:07:54.614 }, 00:07:54.614 "vs": { 00:07:54.614 "nvme_version": "1.3" 00:07:54.614 }, 00:07:54.614 "ns_data": { 00:07:54.614 "id": 1, 00:07:54.614 "can_share": true 00:07:54.614 } 00:07:54.614 } 00:07:54.614 ], 00:07:54.614 "mp_policy": "active_passive" 00:07:54.614 } 00:07:54.614 } 00:07:54.614 ] 00:07:54.614 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=929972 00:07:54.614 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:54.614 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:54.872 Running I/O for 10 seconds... 00:07:55.805 Latency(us) 00:07:55.805 [2024-11-15T11:29:36.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.805 Nvme0n1 : 1.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:07:55.805 [2024-11-15T11:29:36.149Z] =================================================================================================================== 00:07:55.805 [2024-11-15T11:29:36.149Z] Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:07:55.805 00:07:56.738 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 57e13417-21c5-4706-99a7-3ed35cfab3be 00:07:56.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.738 Nvme0n1 : 2.00 15082.00 58.91 0.00 0.00 0.00 0.00 0.00 00:07:56.738 [2024-11-15T11:29:37.082Z] =================================================================================================================== 00:07:56.738 [2024-11-15T11:29:37.082Z] Total : 15082.00 58.91 0.00 0.00 0.00 0.00 0.00 00:07:56.738 00:07:56.995 true 00:07:56.995 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:56.995 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57e13417-21c5-4706-99a7-3ed35cfab3be 00:07:57.252 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:57.252 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:57.253 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 929972 00:07:57.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.819 Nvme0n1 : 3.00 15198.33 59.37 0.00 0.00 0.00 0.00 0.00 00:07:57.819 [2024-11-15T11:29:38.163Z] =================================================================================================================== 00:07:57.819 [2024-11-15T11:29:38.163Z] Total : 15198.33 59.37 0.00 0.00 0.00 0.00 0.00 00:07:57.819 00:07:58.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.753 Nvme0n1 : 4.00 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:07:58.753 [2024-11-15T11:29:39.097Z] 
=================================================================================================================== 00:07:58.753 [2024-11-15T11:29:39.097Z] Total : 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:07:58.753 00:08:00.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.127 Nvme0n1 : 5.00 15348.80 59.96 0.00 0.00 0.00 0.00 0.00 00:08:00.127 [2024-11-15T11:29:40.471Z] =================================================================================================================== 00:08:00.127 [2024-11-15T11:29:40.471Z] Total : 15348.80 59.96 0.00 0.00 0.00 0.00 0.00 00:08:00.127 00:08:01.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.061 Nvme0n1 : 6.00 15394.17 60.13 0.00 0.00 0.00 0.00 0.00 00:08:01.061 [2024-11-15T11:29:41.405Z] =================================================================================================================== 00:08:01.061 [2024-11-15T11:29:41.405Z] Total : 15394.17 60.13 0.00 0.00 0.00 0.00 0.00 00:08:01.061 00:08:01.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.994 Nvme0n1 : 7.00 15444.71 60.33 0.00 0.00 0.00 0.00 0.00 00:08:01.994 [2024-11-15T11:29:42.338Z] =================================================================================================================== 00:08:01.994 [2024-11-15T11:29:42.338Z] Total : 15444.71 60.33 0.00 0.00 0.00 0.00 0.00 00:08:01.994 00:08:02.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.927 Nvme0n1 : 8.00 15482.62 60.48 0.00 0.00 0.00 0.00 0.00 00:08:02.927 [2024-11-15T11:29:43.271Z] =================================================================================================================== 00:08:02.927 [2024-11-15T11:29:43.271Z] Total : 15482.62 60.48 0.00 0.00 0.00 0.00 0.00 00:08:02.927 00:08:03.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.861 Nvme0n1 : 9.00 15526.22 60.65 0.00 0.00 0.00 0.00 0.00 00:08:03.861 [2024-11-15T11:29:44.205Z] =================================================================================================================== 00:08:03.861 [2024-11-15T11:29:44.205Z] Total : 15526.22 60.65 0.00 0.00 0.00 0.00 0.00 00:08:03.861 00:08:04.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.795 Nvme0n1 : 10.00 15554.90 60.76 0.00 0.00 0.00 0.00 0.00 00:08:04.795 [2024-11-15T11:29:45.139Z] =================================================================================================================== 00:08:04.795 [2024-11-15T11:29:45.139Z] Total : 15554.90 60.76 0.00 0.00 0.00 0.00 0.00 00:08:04.795 00:08:04.795 00:08:04.795 Latency(us) 00:08:04.795 [2024-11-15T11:29:45.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.795 Nvme0n1 : 10.00 15562.19 60.79 0.00 0.00 8220.74 4393.34 19418.07 00:08:04.795 [2024-11-15T11:29:45.139Z] =================================================================================================================== 00:08:04.795 [2024-11-15T11:29:45.139Z] Total : 15562.19 60.79 0.00 0.00 8220.74 4393.34 19418.07 00:08:04.795 { 00:08:04.795 "results": [ 00:08:04.795 { 00:08:04.795 "job": "Nvme0n1", 00:08:04.795 "core_mask": "0x2", 00:08:04.795 "workload": "randwrite", 00:08:04.795 "status": "finished", 00:08:04.795 "queue_depth": 128, 00:08:04.795 "io_size": 4096, 00:08:04.795 
"runtime": 10.003539, 00:08:04.795 "iops": 15562.192540060072, 00:08:04.795 "mibps": 60.78981460960966, 00:08:04.795 "io_failed": 0, 00:08:04.795 "io_timeout": 0, 00:08:04.795 "avg_latency_us": 8220.737335779995, 00:08:04.795 "min_latency_us": 4393.339259259259, 00:08:04.795 "max_latency_us": 19418.074074074073 00:08:04.795 } 00:08:04.795 ], 00:08:04.795 "core_count": 1 00:08:04.795 } 00:08:04.795 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 929843 00:08:04.795 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 929843 ']' 00:08:04.795 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 929843 00:08:04.795 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:04.795 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.795 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 929843 00:08:04.795 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:04.795 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:04.795 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 929843' 00:08:04.795 killing process with pid 929843 00:08:04.795 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 929843 00:08:04.795 Received shutdown signal, test time was about 10.000000 seconds 00:08:04.795 00:08:04.795 Latency(us) 00:08:04.795 [2024-11-15T11:29:45.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.795 [2024-11-15T11:29:45.139Z] =================================================================================================================== 00:08:04.795 [2024-11-15T11:29:45.139Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:04.795 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 929843 00:08:05.083 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.387 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:05.700 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57e13417-21c5-4706-99a7-3ed35cfab3be 00:08:05.700 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:05.963 12:29:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 927217 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 927217 00:08:05.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 927217 Killed "${NVMF_APP[@]}" "$@" 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=931279 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 931279 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 931279 ']' 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.963 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.963 [2024-11-15 12:29:46.232129] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:05.963 [2024-11-15 12:29:46.232221] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.220 [2024-11-15 12:29:46.307026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.220 [2024-11-15 12:29:46.363943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.220 [2024-11-15 12:29:46.363998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.220 [2024-11-15 12:29:46.364027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.221 [2024-11-15 12:29:46.364038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:06.221 [2024-11-15 12:29:46.364056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.221 [2024-11-15 12:29:46.364655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.221 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.221 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:06.221 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:06.221 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:06.221 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.221 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.221 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.478 [2024-11-15 12:29:46.748499] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:06.478 [2024-11-15 12:29:46.748624] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:06.478 [2024-11-15 12:29:46.748669] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:06.478 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:06.478 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ac4567e3-9733-4a3b-94db-31534a5c3c7c 00:08:06.478 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ac4567e3-9733-4a3b-94db-31534a5c3c7c 00:08:06.478 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:06.478 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:06.478 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:06.478 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:06.478 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:06.736 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ac4567e3-9733-4a3b-94db-31534a5c3c7c -t 2000 00:08:06.993 [ 00:08:06.993 { 00:08:06.993 "name": "ac4567e3-9733-4a3b-94db-31534a5c3c7c", 00:08:06.993 "aliases": [ 00:08:06.993 "lvs/lvol" 00:08:06.993 ], 00:08:06.993 "product_name": "Logical Volume", 00:08:06.993 "block_size": 4096, 00:08:06.993 "num_blocks": 38912, 00:08:06.993 "uuid": "ac4567e3-9733-4a3b-94db-31534a5c3c7c", 00:08:06.993 "assigned_rate_limits": { 00:08:06.993 "rw_ios_per_sec": 0, 00:08:06.993 "rw_mbytes_per_sec": 0, 
00:08:06.993 "r_mbytes_per_sec": 0, 00:08:06.993 "w_mbytes_per_sec": 0 00:08:06.993 }, 00:08:06.993 "claimed": false, 00:08:06.993 "zoned": false, 00:08:06.993 "supported_io_types": { 00:08:06.993 "read": true, 00:08:06.993 "write": true, 00:08:06.993 "unmap": true, 00:08:06.993 "flush": false, 00:08:06.993 "reset": true, 00:08:06.993 "nvme_admin": false, 00:08:06.993 "nvme_io": false, 00:08:06.993 "nvme_io_md": false, 00:08:06.993 "write_zeroes": true, 00:08:06.993 "zcopy": false, 00:08:06.993 "get_zone_info": false, 00:08:06.993 "zone_management": false, 00:08:06.993 "zone_append": false, 00:08:06.993 "compare": false, 00:08:06.993 "compare_and_write": false, 00:08:06.993 "abort": false, 00:08:06.993 "seek_hole": true, 00:08:06.993 "seek_data": true, 00:08:06.993 "copy": false, 00:08:06.993 "nvme_iov_md": false 00:08:06.993 }, 00:08:06.993 "driver_specific": { 00:08:06.993 "lvol": { 00:08:06.993 "lvol_store_uuid": "57e13417-21c5-4706-99a7-3ed35cfab3be", 00:08:06.993 "base_bdev": "aio_bdev", 00:08:06.993 "thin_provision": false, 00:08:06.993 "num_allocated_clusters": 38, 00:08:06.993 "snapshot": false, 00:08:06.993 "clone": false, 00:08:06.993 "esnap_clone": false 00:08:06.993 } 00:08:06.993 } 00:08:06.993 } 00:08:06.993 ] 00:08:06.993 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:06.993 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57e13417-21c5-4706-99a7-3ed35cfab3be 00:08:06.993 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:07.251 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:07.251 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57e13417-21c5-4706-99a7-3ed35cfab3be 00:08:07.251 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:07.509 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:07.509 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.768 [2024-11-15 12:29:48.098204] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57e13417-21c5-4706-99a7-3ed35cfab3be 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57e13417-21c5-4706-99a7-3ed35cfab3be 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57e13417-21c5-4706-99a7-3ed35cfab3be 00:08:08.081 request: 00:08:08.081 { 00:08:08.081 "uuid": "57e13417-21c5-4706-99a7-3ed35cfab3be", 00:08:08.081 "method": "bdev_lvol_get_lvstores", 00:08:08.081 "req_id": 1 00:08:08.081 } 00:08:08.081 Got JSON-RPC error response 00:08:08.081 response: 00:08:08.081 { 00:08:08.081 "code": -19, 00:08:08.081 "message": "No such device" 00:08:08.081 } 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:08.081 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:08.339 aio_bdev 00:08:08.339 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ac4567e3-9733-4a3b-94db-31534a5c3c7c 00:08:08.339 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ac4567e3-9733-4a3b-94db-31534a5c3c7c 00:08:08.339 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.339 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:08.339 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.339 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.339 12:29:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:08.597 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ac4567e3-9733-4a3b-94db-31534a5c3c7c -t 2000 00:08:08.855 [ 00:08:08.855 { 00:08:08.855 "name": "ac4567e3-9733-4a3b-94db-31534a5c3c7c", 00:08:08.855 "aliases": [ 00:08:08.855 "lvs/lvol" 00:08:08.855 ], 00:08:08.855 "product_name": "Logical Volume", 00:08:08.855 "block_size": 4096, 00:08:08.855 "num_blocks": 38912, 00:08:08.855 "uuid": "ac4567e3-9733-4a3b-94db-31534a5c3c7c", 00:08:08.855 "assigned_rate_limits": { 00:08:08.855 "rw_ios_per_sec": 0, 00:08:08.855 "rw_mbytes_per_sec": 0, 00:08:08.855 "r_mbytes_per_sec": 0, 00:08:08.855 "w_mbytes_per_sec": 0 00:08:08.855 }, 00:08:08.855 "claimed": false, 00:08:08.855 "zoned": false, 00:08:08.855 "supported_io_types": { 00:08:08.855 "read": true, 00:08:08.855 "write": true, 00:08:08.855 "unmap": true, 00:08:08.855 "flush": false, 00:08:08.855 "reset": true, 00:08:08.855 "nvme_admin": false, 00:08:08.855 "nvme_io": false, 00:08:08.855 "nvme_io_md": false, 00:08:08.855 "write_zeroes": true, 00:08:08.855 "zcopy": false, 00:08:08.855 "get_zone_info": false, 00:08:08.855 "zone_management": false, 00:08:08.855 "zone_append": false, 00:08:08.855 "compare": false, 00:08:08.855 "compare_and_write": false, 00:08:08.855 "abort": false, 00:08:08.855 "seek_hole": true, 00:08:08.855 "seek_data": true, 00:08:08.855 "copy": false, 00:08:08.855 "nvme_iov_md": false 00:08:08.855 }, 00:08:08.855 "driver_specific": { 00:08:08.855 "lvol": { 00:08:08.855 "lvol_store_uuid": "57e13417-21c5-4706-99a7-3ed35cfab3be", 00:08:08.855 "base_bdev": "aio_bdev", 00:08:08.855 "thin_provision": false, 00:08:08.855 "num_allocated_clusters": 38, 00:08:08.855 "snapshot": false, 00:08:08.855 "clone": false, 00:08:08.855 "esnap_clone": false 00:08:08.856 } 00:08:08.856 } 00:08:08.856 } 00:08:08.856 ] 00:08:09.113 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:09.113 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57e13417-21c5-4706-99a7-3ed35cfab3be 00:08:09.113 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:09.372 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:09.372 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57e13417-21c5-4706-99a7-3ed35cfab3be 00:08:09.372 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:09.629 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:09.629 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ac4567e3-9733-4a3b-94db-31534a5c3c7c 00:08:09.888 12:29:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 57e13417-21c5-4706-99a7-3ed35cfab3be 00:08:10.146 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:10.405 00:08:10.405 real 0m19.400s 00:08:10.405 user 0m49.411s 00:08:10.405 sys 0m4.477s 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:10.405 ************************************ 00:08:10.405 END TEST lvs_grow_dirty 00:08:10.405 ************************************ 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:10.405 nvmf_trace.0 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:10.405 rmmod nvme_tcp 00:08:10.405 rmmod nvme_fabrics 00:08:10.405 rmmod nvme_keyring 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:10.405 
12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 931279 ']' 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 931279 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 931279 ']' 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 931279 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.405 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 931279 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 931279' 00:08:10.664 killing process with pid 931279 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 931279 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 931279 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.664 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:13.201 00:08:13.201 real 0m42.786s 00:08:13.201 user 1m12.934s 00:08:13.201 sys 0m8.225s 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.201 ************************************ 00:08:13.201 END TEST nvmf_lvs_grow 00:08:13.201 ************************************ 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.201 ************************************ 00:08:13.201 START TEST nvmf_bdev_io_wait 00:08:13.201 ************************************ 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:13.201 * Looking for test storage... 00:08:13.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:13.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.201 --rc genhtml_branch_coverage=1 00:08:13.201 --rc genhtml_function_coverage=1 00:08:13.201 --rc genhtml_legend=1 00:08:13.201 --rc geninfo_all_blocks=1 00:08:13.201 --rc geninfo_unexecuted_blocks=1 00:08:13.201 00:08:13.201 ' 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:13.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.201 --rc genhtml_branch_coverage=1 00:08:13.201 --rc genhtml_function_coverage=1 00:08:13.201 --rc genhtml_legend=1 00:08:13.201 --rc geninfo_all_blocks=1 00:08:13.201 --rc geninfo_unexecuted_blocks=1 00:08:13.201 00:08:13.201 ' 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:13.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.201 --rc genhtml_branch_coverage=1 00:08:13.201 --rc genhtml_function_coverage=1 00:08:13.201 --rc genhtml_legend=1 00:08:13.201 --rc geninfo_all_blocks=1 00:08:13.201 --rc geninfo_unexecuted_blocks=1 00:08:13.201 00:08:13.201 ' 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:13.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.201 --rc genhtml_branch_coverage=1 00:08:13.201 --rc genhtml_function_coverage=1 00:08:13.201 --rc genhtml_legend=1 00:08:13.201 --rc geninfo_all_blocks=1 00:08:13.201 --rc geninfo_unexecuted_blocks=1 00:08:13.201 00:08:13.201 ' 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.201 12:29:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.201 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:13.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:13.202 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.749 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:15.750 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:15.750 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.750 12:29:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:15.750 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:15.750 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:08:15.750 00:08:15.750 --- 10.0.0.2 ping statistics --- 00:08:15.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.750 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:08:15.750 00:08:15.750 --- 10.0.0.1 ping statistics --- 00:08:15.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.750 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.750 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=933865 00:08:15.751 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:15.751 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 933865 00:08:15.751 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 933865 ']' 00:08:15.751 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.751 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.751 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.751 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.751 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.751 [2024-11-15 12:29:55.717196] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:08:15.751 [2024-11-15 12:29:55.717273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.751 [2024-11-15 12:29:55.785847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.751 [2024-11-15 12:29:55.841988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.751 [2024-11-15 12:29:55.842044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.751 [2024-11-15 12:29:55.842070] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.751 [2024-11-15 12:29:55.842081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.751 [2024-11-15 12:29:55.842090] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.751 [2024-11-15 12:29:55.843694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.751 [2024-11-15 12:29:55.843759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.751 [2024-11-15 12:29:55.843825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.751 [2024-11-15 12:29:55.843828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.751 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.751 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:15.751 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.751 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:15.751 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.751 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.751 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:15.751 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.751 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.751 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.751 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:15.751 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.751 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.751 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.751 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.751 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.751 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:16.010 [2024-11-15 12:29:56.094477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.010 Malloc0 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.010 [2024-11-15 12:29:56.144850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=933892 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=933894 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.010 { 00:08:16.010 "params": { 
00:08:16.010 "name": "Nvme$subsystem", 00:08:16.010 "trtype": "$TEST_TRANSPORT", 00:08:16.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.010 "adrfam": "ipv4", 00:08:16.010 "trsvcid": "$NVMF_PORT", 00:08:16.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.010 "hdgst": ${hdgst:-false}, 00:08:16.010 "ddgst": ${ddgst:-false} 00:08:16.010 }, 00:08:16.010 "method": "bdev_nvme_attach_controller" 00:08:16.010 } 00:08:16.010 EOF 00:08:16.010 )") 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=933896 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.010 { 00:08:16.010 "params": { 00:08:16.010 "name": "Nvme$subsystem", 00:08:16.010 "trtype": "$TEST_TRANSPORT", 00:08:16.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.010 "adrfam": "ipv4", 00:08:16.010 "trsvcid": "$NVMF_PORT", 00:08:16.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.010 "hdgst": ${hdgst:-false}, 00:08:16.010 "ddgst": ${ddgst:-false} 00:08:16.010 }, 00:08:16.010 "method": "bdev_nvme_attach_controller" 00:08:16.010 } 00:08:16.010 EOF 00:08:16.010 )") 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=933899 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.010 { 00:08:16.010 "params": { 00:08:16.010 "name": "Nvme$subsystem", 00:08:16.010 "trtype": "$TEST_TRANSPORT", 00:08:16.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.010 "adrfam": "ipv4", 00:08:16.010 "trsvcid": "$NVMF_PORT", 00:08:16.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.010 "hdgst": ${hdgst:-false}, 
00:08:16.010 "ddgst": ${ddgst:-false} 00:08:16.010 }, 00:08:16.010 "method": "bdev_nvme_attach_controller" 00:08:16.010 } 00:08:16.010 EOF 00:08:16.010 )") 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:16.010 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.011 { 00:08:16.011 "params": { 00:08:16.011 "name": "Nvme$subsystem", 00:08:16.011 "trtype": "$TEST_TRANSPORT", 00:08:16.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.011 "adrfam": "ipv4", 00:08:16.011 "trsvcid": "$NVMF_PORT", 00:08:16.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.011 "hdgst": ${hdgst:-false}, 00:08:16.011 "ddgst": ${ddgst:-false} 00:08:16.011 }, 00:08:16.011 "method": "bdev_nvme_attach_controller" 00:08:16.011 } 00:08:16.011 EOF 00:08:16.011 )") 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 933892 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.011 "params": { 00:08:16.011 "name": "Nvme1", 00:08:16.011 "trtype": "tcp", 00:08:16.011 "traddr": "10.0.0.2", 00:08:16.011 "adrfam": "ipv4", 00:08:16.011 "trsvcid": "4420", 00:08:16.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.011 "hdgst": false, 00:08:16.011 "ddgst": false 00:08:16.011 }, 00:08:16.011 "method": "bdev_nvme_attach_controller" 00:08:16.011 }' 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.011 "params": { 00:08:16.011 "name": "Nvme1", 00:08:16.011 "trtype": "tcp", 00:08:16.011 "traddr": "10.0.0.2", 00:08:16.011 "adrfam": "ipv4", 00:08:16.011 "trsvcid": "4420", 00:08:16.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.011 "hdgst": false, 00:08:16.011 "ddgst": false 00:08:16.011 }, 00:08:16.011 "method": "bdev_nvme_attach_controller" 00:08:16.011 }' 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.011 "params": { 00:08:16.011 "name": "Nvme1", 00:08:16.011 "trtype": "tcp", 00:08:16.011 "traddr": "10.0.0.2", 00:08:16.011 "adrfam": "ipv4", 00:08:16.011 "trsvcid": "4420", 00:08:16.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.011 "hdgst": false, 00:08:16.011 "ddgst": false 00:08:16.011 }, 00:08:16.011 "method": "bdev_nvme_attach_controller" 00:08:16.011 }' 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:16.011 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.011 "params": { 00:08:16.011 "name": "Nvme1", 00:08:16.011 "trtype": "tcp", 00:08:16.011 "traddr": "10.0.0.2", 00:08:16.011 "adrfam": "ipv4", 00:08:16.011 "trsvcid": "4420", 00:08:16.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.011 "hdgst": false, 00:08:16.011 "ddgst": false 00:08:16.011 }, 00:08:16.011 "method": "bdev_nvme_attach_controller" 00:08:16.011 }' 00:08:16.011 [2024-11-15 12:29:56.196377] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:16.011 [2024-11-15 12:29:56.196377] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:16.011 [2024-11-15 12:29:56.196388] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:16.011 [2024-11-15 12:29:56.196389] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:08:16.011 [2024-11-15 12:29:56.196472] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:16.011 [2024-11-15 12:29:56.196472] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:16.011 [2024-11-15 12:29:56.196472] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:16.011 [2024-11-15 12:29:56.196471] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:16.269 [2024-11-15 12:29:56.381969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.269 [2024-11-15 12:29:56.438517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:16.269 [2024-11-15 12:29:56.487364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.269 [2024-11-15 12:29:56.543217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:16.269 [2024-11-15 12:29:56.561868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.269 [2024-11-15 12:29:56.612329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:16.527 [2024-11-15 12:29:56.632622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.527 [2024-11-15 12:29:56.684270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:16.527 Running I/O for 1 seconds... 00:08:16.527 Running I/O for 1 seconds... 00:08:16.527 Running I/O for 1 seconds... 00:08:16.786 Running I/O for 1 seconds...
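While the four jobs run, note how they were launched: each bdevperf above is handed its NVMe-oF attach configuration as JSON on /dev/fd/63 via process substitution, i.e. the bdev_nvme_attach_controller fragment printf'd through jq a few entries earlier with the variables expanded. The sketch below uses a plain file (/tmp/nvme1.json, a stand-in for the process-substitution fd) and shows the write job; the read, flush and unmap jobs differ only in -m, -i and -w. The outer "subsystems"/"config" wrapper is the standard SPDK JSON config shape gen_nvmf_target_json emits around that fragment, reconstructed here rather than copied from the trace:

  cat >/tmp/nvme1.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false } } ] } ] }
JSON
  # queue depth 128, 4 KiB I/O, 1-second write workload on core 4 (mask 0x10), shm id 1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256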
00:08:17.720 196512.00 IOPS, 767.62 MiB/s 00:08:17.720 Latency(us) 00:08:17.720 [2024-11-15T11:29:58.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.720 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:17.720 Nvme1n1 : 1.00 196143.99 766.19 0.00 0.00 649.01 297.34 1868.99 00:08:17.720 [2024-11-15T11:29:58.064Z] =================================================================================================================== 00:08:17.720 [2024-11-15T11:29:58.064Z] Total : 196143.99 766.19 0.00 0.00 649.01 297.34 1868.99 00:08:17.720 6520.00 IOPS, 25.47 MiB/s 00:08:17.720 Latency(us) 00:08:17.720 [2024-11-15T11:29:58.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.720 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:17.720 Nvme1n1 : 1.02 6512.53 25.44 0.00 0.00 19451.46 7815.77 27767.85 00:08:17.720 [2024-11-15T11:29:58.064Z] =================================================================================================================== 00:08:17.720 [2024-11-15T11:29:58.064Z] Total : 6512.53 25.44 0.00 0.00 19451.46 7815.77 27767.85 00:08:17.720 9151.00 IOPS, 35.75 MiB/s 00:08:17.720 Latency(us) 00:08:17.720 [2024-11-15T11:29:58.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.720 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:17.720 Nvme1n1 : 1.01 9206.87 35.96 0.00 0.00 13836.99 6699.24 26408.58 00:08:17.720 [2024-11-15T11:29:58.064Z] =================================================================================================================== 00:08:17.720 [2024-11-15T11:29:58.064Z] Total : 9206.87 35.96 0.00 0.00 13836.99 6699.24 26408.58 00:08:17.720 6188.00 IOPS, 24.17 MiB/s 00:08:17.721 Latency(us) 00:08:17.721 [2024-11-15T11:29:58.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.721 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:17.721 Nvme1n1 : 1.01 6277.00 24.52 0.00 0.00 20320.54 5315.70 44467.39 00:08:17.721 [2024-11-15T11:29:58.065Z] =================================================================================================================== 00:08:17.721 [2024-11-15T11:29:58.065Z] Total : 6277.00 24.52 0.00 0.00 20320.54 5315.70 44467.39 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 933894 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 933896 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 933899 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:17.979 rmmod nvme_tcp 00:08:17.979 rmmod nvme_fabrics 00:08:17.979 rmmod nvme_keyring 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 933865 ']' 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 933865 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 933865 ']' 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 933865 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 933865 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 933865' 00:08:17.979 killing process with pid 933865 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 933865 00:08:17.979 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 933865 00:08:18.237 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:18.237 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:18.237 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:18.237 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:18.237 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:18.237 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:18.237 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:18.237 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:18.237 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:18.237 12:29:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.237 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.237 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.148 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.148 00:08:20.148 real 0m7.388s 00:08:20.148 user 0m16.226s 00:08:20.148 sys 0m3.563s 00:08:20.148 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.148 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:20.148 ************************************ 00:08:20.148 END TEST nvmf_bdev_io_wait 00:08:20.148 ************************************ 00:08:20.148 12:30:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:20.148 12:30:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.148 12:30:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.148 12:30:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.407 ************************************ 00:08:20.407 START TEST nvmf_queue_depth 00:08:20.407 ************************************ 00:08:20.407 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:20.407 * Looking for test storage... 
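Before queue_depth.sh repeats essentially the same bring-up, the teardown traced at the end of the bdev_io_wait run above (nvmftestfini) amounts to roughly the manual cleanup sketched here; _remove_spdk_ns itself is not expanded in the trace, so deleting the namespace is assumed to be its core step:

  kill "$nvmfpid"                               # stop the nvmf_tgt started for the test
  modprobe -v -r nvme-tcp                       # unload host-side modules (pulls nvme-fabrics/keyring with it)
  modprobe -v -r nvme-fabrics
  # drop only the firewall rules the harness tagged with the SPDK_NVMF comment
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk               # physical port falls back to the default namespace
  ip -4 addr flush cvl_0_1                      # clear the initiator-side address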
00:08:20.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.407 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:20.407 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:20.407 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:20.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.408 --rc genhtml_branch_coverage=1 00:08:20.408 --rc genhtml_function_coverage=1 00:08:20.408 --rc genhtml_legend=1 00:08:20.408 --rc geninfo_all_blocks=1 00:08:20.408 --rc geninfo_unexecuted_blocks=1 00:08:20.408 00:08:20.408 ' 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:20.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.408 --rc genhtml_branch_coverage=1 00:08:20.408 --rc genhtml_function_coverage=1 00:08:20.408 --rc genhtml_legend=1 00:08:20.408 --rc geninfo_all_blocks=1 00:08:20.408 --rc geninfo_unexecuted_blocks=1 00:08:20.408 00:08:20.408 ' 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:20.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.408 --rc genhtml_branch_coverage=1 00:08:20.408 --rc genhtml_function_coverage=1 00:08:20.408 --rc genhtml_legend=1 00:08:20.408 --rc geninfo_all_blocks=1 00:08:20.408 --rc geninfo_unexecuted_blocks=1 00:08:20.408 00:08:20.408 ' 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:20.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.408 --rc genhtml_branch_coverage=1 00:08:20.408 --rc genhtml_function_coverage=1 00:08:20.408 --rc genhtml_legend=1 00:08:20.408 --rc geninfo_all_blocks=1 00:08:20.408 --rc geninfo_unexecuted_blocks=1 00:08:20.408 00:08:20.408 ' 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:20.408 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.409 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:20.409 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:20.409 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:20.409 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.409 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.409 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.409 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:20.409 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:20.409 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:20.409 12:30:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.940 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:22.941 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:22.941 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:22.941 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:22.941 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:22.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:08:22.941 00:08:22.941 --- 10.0.0.2 ping statistics --- 00:08:22.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.941 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:22.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:08:22.941 00:08:22.941 --- 10.0.0.1 ping statistics --- 00:08:22.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.941 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:22.941 12:30:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:22.941 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:22.941 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:22.941 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.941 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:22.941 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=936243 00:08:22.941 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 936243 00:08:22.941 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:22.941 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 936243 ']' 00:08:22.941 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.941 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.941 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.941 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.941 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:22.941 [2024-11-15 12:30:03.060539] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
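The interface names used throughout (cvl_0_0, cvl_0_1) come out of the device-discovery pass traced further up: gather_supported_nvmf_pci_devs matches the two Intel E810 functions (0x8086:0x159b, ice driver) and resolves each PCI address to its kernel net device by globbing sysfs, exactly the /sys/bus/pci/devices/$pci/net/* lookup shown in the trace. A stripped-down sketch of that mapping for this rig:

  # resolve the two E810 functions found above to their netdev names
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdev" ] || continue
          echo "Found net devices under $pci: ${netdev##*/}"
      done
  done
  # on this machine that prints cvl_0_0 and cvl_0_1, the renamed ice ports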
00:08:22.941 [2024-11-15 12:30:03.060605] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.941 [2024-11-15 12:30:03.135518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.941 [2024-11-15 12:30:03.194544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.941 [2024-11-15 12:30:03.194603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.941 [2024-11-15 12:30:03.194617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.941 [2024-11-15 12:30:03.194644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.941 [2024-11-15 12:30:03.194653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.941 [2024-11-15 12:30:03.195280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.200 [2024-11-15 12:30:03.345247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.200 Malloc0 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.200 12:30:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.200 [2024-11-15 12:30:03.391674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=936381 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 936381 /var/tmp/bdevperf.sock 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 936381 ']' 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:23.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.200 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.200 [2024-11-15 12:30:03.439545] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
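Condensed from the rpc_cmd calls in this trace, the queue-depth test boils down to the sequence below. rpc_cmd appears to wrap scripts/rpc.py (against /var/tmp/spdk.sock for the target and, with -s, /var/tmp/bdevperf.sock for the initiator), so rpc.py is used here as a stand-in for that wrapper; the attach and perform_tests steps are the ones traced just below.

  # target side: TCP transport, one 64 MiB malloc namespace, subsystem + listener
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf idles under -z until perform_tests arrives on its RPC socket
  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                  # runs the 10 s verify workload at queue depth 1024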
00:08:23.200 [2024-11-15 12:30:03.439622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936381 ] 00:08:23.200 [2024-11-15 12:30:03.507821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.459 [2024-11-15 12:30:03.567587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.459 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.459 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:23.459 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:23.459 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.459 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.717 NVMe0n1 00:08:23.717 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.717 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:23.717 Running I/O for 10 seconds... 00:08:26.024 8192.00 IOPS, 32.00 MiB/s [2024-11-15T11:30:07.303Z] 8308.50 IOPS, 32.46 MiB/s [2024-11-15T11:30:08.237Z] 8488.67 IOPS, 33.16 MiB/s [2024-11-15T11:30:09.171Z] 8449.75 IOPS, 33.01 MiB/s [2024-11-15T11:30:10.105Z] 8511.60 IOPS, 33.25 MiB/s [2024-11-15T11:30:11.480Z] 8526.17 IOPS, 33.31 MiB/s [2024-11-15T11:30:12.415Z] 8532.00 IOPS, 33.33 MiB/s [2024-11-15T11:30:13.347Z] 8569.50 IOPS, 33.47 MiB/s [2024-11-15T11:30:14.281Z] 8582.78 IOPS, 33.53 MiB/s [2024-11-15T11:30:14.281Z] 8592.80 IOPS, 33.57 MiB/s 00:08:33.937 Latency(us) 00:08:33.937 [2024-11-15T11:30:14.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.937 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:33.937 Verification LBA range: start 0x0 length 0x4000 00:08:33.937 NVMe0n1 : 10.08 8624.18 33.69 0.00 0.00 118273.91 21554.06 69905.07 00:08:33.937 [2024-11-15T11:30:14.281Z] =================================================================================================================== 00:08:33.937 [2024-11-15T11:30:14.282Z] Total : 8624.18 33.69 0.00 0.00 118273.91 21554.06 69905.07 00:08:33.938 { 00:08:33.938 "results": [ 00:08:33.938 { 00:08:33.938 "job": "NVMe0n1", 00:08:33.938 "core_mask": "0x1", 00:08:33.938 "workload": "verify", 00:08:33.938 "status": "finished", 00:08:33.938 "verify_range": { 00:08:33.938 "start": 0, 00:08:33.938 "length": 16384 00:08:33.938 }, 00:08:33.938 "queue_depth": 1024, 00:08:33.938 "io_size": 4096, 00:08:33.938 "runtime": 10.082353, 00:08:33.938 "iops": 8624.177312577729, 00:08:33.938 "mibps": 33.688192627256754, 00:08:33.938 "io_failed": 0, 00:08:33.938 "io_timeout": 0, 00:08:33.938 "avg_latency_us": 118273.91098652981, 00:08:33.938 "min_latency_us": 21554.062222222223, 00:08:33.938 "max_latency_us": 69905.06666666667 00:08:33.938 } 00:08:33.938 ], 00:08:33.938 "core_count": 1 00:08:33.938 } 00:08:33.938 12:30:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 936381 00:08:33.938 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 936381 ']' 00:08:33.938 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 936381 00:08:33.938 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:33.938 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.938 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 936381 00:08:33.938 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.938 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.938 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 936381' 00:08:33.938 killing process with pid 936381 00:08:33.938 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 936381 00:08:33.938 Received shutdown signal, test time was about 10.000000 seconds 00:08:33.938 00:08:33.938 Latency(us) 00:08:33.938 [2024-11-15T11:30:14.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.938 [2024-11-15T11:30:14.282Z] =================================================================================================================== 00:08:33.938 [2024-11-15T11:30:14.282Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:33.938 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 936381 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.196 rmmod nvme_tcp 00:08:34.196 rmmod nvme_fabrics 00:08:34.196 rmmod nvme_keyring 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 936243 ']' 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 936243 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 936243 ']' 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 936243 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 936243 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 936243' 00:08:34.196 killing process with pid 936243 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 936243 00:08:34.196 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 936243 00:08:34.456 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:34.456 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:34.456 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:34.456 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:34.456 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:34.456 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:34.456 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:34.456 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.456 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:34.456 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.456 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.456 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:37.022 00:08:37.022 real 0m16.287s 00:08:37.022 user 0m22.776s 00:08:37.022 sys 0m3.182s 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.022 ************************************ 00:08:37.022 END TEST nvmf_queue_depth 00:08:37.022 ************************************ 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.022 ************************************ 00:08:37.022 START TEST nvmf_target_multipath 00:08:37.022 ************************************ 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:37.022 * Looking for test storage... 00:08:37.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:37.022 12:30:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.022 --rc genhtml_branch_coverage=1 00:08:37.022 --rc genhtml_function_coverage=1 00:08:37.022 --rc genhtml_legend=1 00:08:37.022 --rc geninfo_all_blocks=1 00:08:37.022 --rc geninfo_unexecuted_blocks=1 00:08:37.022 00:08:37.022 ' 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.022 --rc genhtml_branch_coverage=1 00:08:37.022 --rc genhtml_function_coverage=1 00:08:37.022 --rc genhtml_legend=1 00:08:37.022 --rc geninfo_all_blocks=1 00:08:37.022 --rc geninfo_unexecuted_blocks=1 00:08:37.022 00:08:37.022 ' 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.022 --rc genhtml_branch_coverage=1 00:08:37.022 --rc genhtml_function_coverage=1 00:08:37.022 --rc genhtml_legend=1 00:08:37.022 --rc geninfo_all_blocks=1 00:08:37.022 --rc geninfo_unexecuted_blocks=1 00:08:37.022 00:08:37.022 ' 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.022 --rc genhtml_branch_coverage=1 00:08:37.022 --rc genhtml_function_coverage=1 00:08:37.022 --rc genhtml_legend=1 00:08:37.022 --rc geninfo_all_blocks=1 00:08:37.022 --rc geninfo_unexecuted_blocks=1 00:08:37.022 00:08:37.022 ' 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.022 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.023 12:30:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:38.929 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:38.929 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:38.929 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.929 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.930 12:30:19 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:38.930 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.930 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:39.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:08:39.190 00:08:39.190 --- 10.0.0.2 ping statistics --- 00:08:39.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.190 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:08:39.190 00:08:39.190 --- 10.0.0.1 ping statistics --- 00:08:39.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.190 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:39.190 only one NIC for nvmf test 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
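The multipath test stops here by design: target/multipath.sh@45 tests an empty string and takes the skip path (echo, nvmftestfini, exit 0). Judging by the empty NVMF_SECOND_TARGET_IP= assignment earlier in the trace, the guard is presumably along these lines; the variable name is inferred, not confirmed from the script.

  # sketch of the skip path seen at target/multipath.sh@45-48
  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
          echo 'only one NIC for nvmf test'
          nvmftestfini
          exit 0
  fi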
00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:39.190 rmmod nvme_tcp 00:08:39.190 rmmod nvme_fabrics 00:08:39.190 rmmod nvme_keyring 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.190 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:41.728 00:08:41.728 real 0m4.698s 00:08:41.728 user 0m0.927s 00:08:41.728 sys 0m1.784s 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:41.728 ************************************ 00:08:41.728 END TEST nvmf_target_multipath 00:08:41.728 ************************************ 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.728 ************************************ 00:08:41.728 START TEST nvmf_zcopy 00:08:41.728 ************************************ 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:41.728 * Looking for test storage... 
00:08:41.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.728 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:41.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.729 --rc genhtml_branch_coverage=1 00:08:41.729 --rc genhtml_function_coverage=1 00:08:41.729 --rc genhtml_legend=1 00:08:41.729 --rc geninfo_all_blocks=1 00:08:41.729 --rc geninfo_unexecuted_blocks=1 00:08:41.729 00:08:41.729 ' 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:41.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.729 --rc genhtml_branch_coverage=1 00:08:41.729 --rc genhtml_function_coverage=1 00:08:41.729 --rc genhtml_legend=1 00:08:41.729 --rc geninfo_all_blocks=1 00:08:41.729 --rc geninfo_unexecuted_blocks=1 00:08:41.729 00:08:41.729 ' 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:41.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.729 --rc genhtml_branch_coverage=1 00:08:41.729 --rc genhtml_function_coverage=1 00:08:41.729 --rc genhtml_legend=1 00:08:41.729 --rc geninfo_all_blocks=1 00:08:41.729 --rc geninfo_unexecuted_blocks=1 00:08:41.729 00:08:41.729 ' 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:41.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.729 --rc genhtml_branch_coverage=1 00:08:41.729 --rc genhtml_function_coverage=1 00:08:41.729 --rc genhtml_legend=1 00:08:41.729 --rc geninfo_all_blocks=1 00:08:41.729 --rc geninfo_unexecuted_blocks=1 00:08:41.729 00:08:41.729 ' 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:41.729 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:43.635 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:43.635 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:43.635 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:43.635 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:43.635 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:43.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:08:43.636 00:08:43.636 --- 10.0.0.2 ping statistics --- 00:08:43.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.636 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:08:43.636 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:43.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:08:43.636 00:08:43.636 --- 10.0.0.1 ping statistics --- 00:08:43.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.636 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:08:43.895 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.895 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:43.895 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:43.895 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.895 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:43.895 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:43.895 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.895 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:43.895 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:43.895 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:43.895 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:43.895 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.895 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.895 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=942097 00:08:43.895 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:43.895 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 942097 00:08:43.895 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 942097 ']' 00:08:43.895 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.895 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.895 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.895 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.895 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.895 [2024-11-15 12:30:24.064963] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:08:43.895 [2024-11-15 12:30:24.065056] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.895 [2024-11-15 12:30:24.139083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.895 [2024-11-15 12:30:24.199263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.895 [2024-11-15 12:30:24.199316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.895 [2024-11-15 12:30:24.199345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.895 [2024-11-15 12:30:24.199357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.895 [2024-11-15 12:30:24.199367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.895 [2024-11-15 12:30:24.200028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.154 [2024-11-15 12:30:24.351365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.154 [2024-11-15 12:30:24.367570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.154 malloc0 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:44.154 { 00:08:44.154 "params": { 00:08:44.154 "name": "Nvme$subsystem", 00:08:44.154 "trtype": "$TEST_TRANSPORT", 00:08:44.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:44.154 "adrfam": "ipv4", 00:08:44.154 "trsvcid": "$NVMF_PORT", 00:08:44.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:44.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:44.154 "hdgst": ${hdgst:-false}, 00:08:44.154 "ddgst": ${ddgst:-false} 00:08:44.154 }, 00:08:44.154 "method": "bdev_nvme_attach_controller" 00:08:44.154 } 00:08:44.154 EOF 00:08:44.154 )") 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:44.154 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:44.154 "params": { 00:08:44.154 "name": "Nvme1", 00:08:44.154 "trtype": "tcp", 00:08:44.154 "traddr": "10.0.0.2", 00:08:44.154 "adrfam": "ipv4", 00:08:44.154 "trsvcid": "4420", 00:08:44.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:44.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:44.154 "hdgst": false, 00:08:44.154 "ddgst": false 00:08:44.154 }, 00:08:44.154 "method": "bdev_nvme_attach_controller" 00:08:44.154 }' 00:08:44.154 [2024-11-15 12:30:24.455563] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:08:44.154 [2024-11-15 12:30:24.455645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942123 ] 00:08:44.413 [2024-11-15 12:30:24.528545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.413 [2024-11-15 12:30:24.586632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.671 Running I/O for 10 seconds... 00:08:46.983 5738.00 IOPS, 44.83 MiB/s [2024-11-15T11:30:28.263Z] 5822.50 IOPS, 45.49 MiB/s [2024-11-15T11:30:29.197Z] 5824.00 IOPS, 45.50 MiB/s [2024-11-15T11:30:30.133Z] 5819.25 IOPS, 45.46 MiB/s [2024-11-15T11:30:31.068Z] 5827.00 IOPS, 45.52 MiB/s [2024-11-15T11:30:32.004Z] 5812.33 IOPS, 45.41 MiB/s [2024-11-15T11:30:32.939Z] 5821.00 IOPS, 45.48 MiB/s [2024-11-15T11:30:34.315Z] 5825.75 IOPS, 45.51 MiB/s [2024-11-15T11:30:35.249Z] 5827.78 IOPS, 45.53 MiB/s [2024-11-15T11:30:35.249Z] 5827.80 IOPS, 45.53 MiB/s 00:08:54.905 Latency(us) 00:08:54.905 [2024-11-15T11:30:35.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.905 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:54.905 Verification LBA range: start 0x0 length 0x1000 00:08:54.905 Nvme1n1 : 10.01 5832.16 45.56 0.00 0.00 21889.08 373.19 31845.64 00:08:54.905 [2024-11-15T11:30:35.249Z] =================================================================================================================== 00:08:54.905 [2024-11-15T11:30:35.249Z] Total : 5832.16 45.56 0.00 0.00 21889.08 373.19 31845.64 00:08:54.905 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=943402 00:08:54.905 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:54.905 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.905 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:54.905 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:54.905 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:54.905 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:54.905 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:54.905 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:54.905 { 00:08:54.905 "params": { 00:08:54.905 "name": 
"Nvme$subsystem", 00:08:54.905 "trtype": "$TEST_TRANSPORT", 00:08:54.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.905 "adrfam": "ipv4", 00:08:54.905 "trsvcid": "$NVMF_PORT", 00:08:54.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.905 "hdgst": ${hdgst:-false}, 00:08:54.905 "ddgst": ${ddgst:-false} 00:08:54.905 }, 00:08:54.905 "method": "bdev_nvme_attach_controller" 00:08:54.905 } 00:08:54.905 EOF 00:08:54.905 )") 00:08:54.905 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:54.905 [2024-11-15 12:30:35.150498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.905 [2024-11-15 12:30:35.150536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.905 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:54.905 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:54.905 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:54.905 "params": { 00:08:54.905 "name": "Nvme1", 00:08:54.905 "trtype": "tcp", 00:08:54.905 "traddr": "10.0.0.2", 00:08:54.905 "adrfam": "ipv4", 00:08:54.905 "trsvcid": "4420", 00:08:54.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:54.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:54.905 "hdgst": false, 00:08:54.905 "ddgst": false 00:08:54.905 }, 00:08:54.905 "method": "bdev_nvme_attach_controller" 00:08:54.905 }' 00:08:54.905 [2024-11-15 12:30:35.158463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.905 [2024-11-15 12:30:35.158485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.905 [2024-11-15 12:30:35.166504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.905 [2024-11-15 12:30:35.166542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.905 [2024-11-15 12:30:35.174504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.905 [2024-11-15 12:30:35.174524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.905 [2024-11-15 12:30:35.182524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.905 [2024-11-15 12:30:35.182545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.905 [2024-11-15 12:30:35.190369] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:08:54.905 [2024-11-15 12:30:35.190436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943402 ] 00:08:54.905 [2024-11-15 12:30:35.190547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.905 [2024-11-15 12:30:35.190567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.905 [2024-11-15 12:30:35.198568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.905 [2024-11-15 12:30:35.198587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.905 [2024-11-15 12:30:35.206589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.905 [2024-11-15 12:30:35.206609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.905 [2024-11-15 12:30:35.214612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.905 [2024-11-15 12:30:35.214632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.905 [2024-11-15 12:30:35.222632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.905 [2024-11-15 12:30:35.222651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.905 [2024-11-15 12:30:35.230652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.905 [2024-11-15 12:30:35.230671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.905 [2024-11-15 12:30:35.238675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.905 [2024-11-15 12:30:35.238694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.905 [2024-11-15 12:30:35.246734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.905 [2024-11-15 12:30:35.246755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.254741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.254763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.258854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.164 [2024-11-15 12:30:35.262765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.262786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.270824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.270857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.278816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.278843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.286844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.286876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:08:55.164 [2024-11-15 12:30:35.294850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.294870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.302870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.302890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.310892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.310913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.318918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.318940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.320272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.164 [2024-11-15 12:30:35.326940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.326962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.334979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.335019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.343030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.343061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.351049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.351082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.359074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.359108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.367112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.367147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.375111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.375146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.383131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.383165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.391123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.391144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.399175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.399208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 
12:30:35.407201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.407237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.415221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.415255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.423227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.423248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.431230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.431259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.439254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.439274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.447323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.447349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.455321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.455344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.463383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.463407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.471375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.471399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.479392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.479413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.487415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.487435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.495437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.495458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.164 [2024-11-15 12:30:35.503464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.164 [2024-11-15 12:30:35.503486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.511486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.511509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.519505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.519528] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.527567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.527592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.535578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.535600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.544404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.544431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.551729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.551753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 Running I/O for 5 seconds... 00:08:55.423 [2024-11-15 12:30:35.559747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.559769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.574104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.574148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.585446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.585475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.598200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.598236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.609058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.609086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.619646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.619674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.630356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.630383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.640916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.640943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.651977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.652005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.662638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.662667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.674950] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.674978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.684986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.685014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.695308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.695336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.423 [2024-11-15 12:30:35.706078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.423 [2024-11-15 12:30:35.706106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.424 [2024-11-15 12:30:35.716766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.424 [2024-11-15 12:30:35.716794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.424 [2024-11-15 12:30:35.727789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.424 [2024-11-15 12:30:35.727816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.424 [2024-11-15 12:30:35.740229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.424 [2024-11-15 12:30:35.740256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.424 [2024-11-15 12:30:35.749625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.424 [2024-11-15 12:30:35.749653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.424 [2024-11-15 12:30:35.760947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.424 [2024-11-15 12:30:35.760974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.771850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.771878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.782268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.782296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.793146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.793174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.803527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.803563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.814464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.814492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.827777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.827806] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.838115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.838143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.848480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.848508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.858964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.858993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.869958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.869986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.882883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.882911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.892797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.892825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.903380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.903408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.913802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.913831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.924849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.924880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.937800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.937829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.947946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.947975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.958907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.958935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.969741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.969769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.980286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.980315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:35.990970] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:35.990998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:36.001648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:36.001676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:36.014577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:36.014605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.682 [2024-11-15 12:30:36.024910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.682 [2024-11-15 12:30:36.024938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.035983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.036011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.048813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.048842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.059274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.059317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.070056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.070084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.083058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.083101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.093638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.093681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.104184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.104212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.114962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.115005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.125979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.126007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.138933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.138961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.150754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.150796] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.159802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.159830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.171473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.171501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.184161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.184190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.194322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.194351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.205113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.205156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.215657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.215686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.226509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.226537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.239219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.239248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.249208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.249237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.260160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.260188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.273187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.273216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.941 [2024-11-15 12:30:36.283422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.941 [2024-11-15 12:30:36.283461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.200 [2024-11-15 12:30:36.294795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.200 [2024-11-15 12:30:36.294824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.200 [2024-11-15 12:30:36.307449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.200 [2024-11-15 12:30:36.307478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.200 [2024-11-15 12:30:36.317501] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.200 [2024-11-15 12:30:36.317530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.200 [2024-11-15 12:30:36.327893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.200 [2024-11-15 12:30:36.327930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.338852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.338880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.349330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.349357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.360340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.360368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.373079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.373107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.383343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.383371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.393970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.393998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.406788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.406816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.417166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.417195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.427864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.427891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.440571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.440599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.451001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.451029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.462118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.462145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.472604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.472632] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.483145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.483174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.493597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.493625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.504311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.504357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.514716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.514753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.525277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.525305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.201 [2024-11-15 12:30:36.535931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.201 [2024-11-15 12:30:36.535974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.546971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.546998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.557690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.557726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 11678.00 IOPS, 91.23 MiB/s [2024-11-15T11:30:36.804Z] [2024-11-15 12:30:36.567854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.567883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.579432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.579460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.590559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.590587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.601772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.601801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.612358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.612386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.623147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.623174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 
12:30:36.634109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.634144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.646645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.646673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.657205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.657248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.667668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.667696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.678439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.678467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.689161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.689189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.702876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.702904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.713324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.713351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.724171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.724200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.736876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.736904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.460 [2024-11-15 12:30:36.748478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.460 [2024-11-15 12:30:36.748521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-15 12:30:36.757523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-15 12:30:36.757551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-15 12:30:36.769271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-15 12:30:36.769299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-15 12:30:36.779943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-15 12:30:36.779971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-15 12:30:36.790443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-15 12:30:36.790471] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.461 [2024-11-15 12:30:36.800834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.461 [2024-11-15 12:30:36.800862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.811670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.719 [2024-11-15 12:30:36.811699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.824307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.719 [2024-11-15 12:30:36.824335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.834414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.719 [2024-11-15 12:30:36.834442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.845390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.719 [2024-11-15 12:30:36.845427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.856021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.719 [2024-11-15 12:30:36.856049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.866852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.719 [2024-11-15 12:30:36.866880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.877512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.719 [2024-11-15 12:30:36.877541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.888238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.719 [2024-11-15 12:30:36.888266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.901204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.719 [2024-11-15 12:30:36.901232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.912038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.719 [2024-11-15 12:30:36.912067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.922898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.719 [2024-11-15 12:30:36.922926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.933716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.719 [2024-11-15 12:30:36.933752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.944342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.719 [2024-11-15 12:30:36.944372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.956822] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.719 [2024-11-15 12:30:36.956850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.719 [2024-11-15 12:30:36.967198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.720 [2024-11-15 12:30:36.967226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.720 [2024-11-15 12:30:36.978379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.720 [2024-11-15 12:30:36.978406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.720 [2024-11-15 12:30:36.989197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.720 [2024-11-15 12:30:36.989226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.720 [2024-11-15 12:30:37.000043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.720 [2024-11-15 12:30:37.000072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.720 [2024-11-15 12:30:37.012682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.720 [2024-11-15 12:30:37.012737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.720 [2024-11-15 12:30:37.022422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.720 [2024-11-15 12:30:37.022451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.720 [2024-11-15 12:30:37.033314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.720 [2024-11-15 12:30:37.033342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.720 [2024-11-15 12:30:37.046138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.720 [2024-11-15 12:30:37.046166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.720 [2024-11-15 12:30:37.056570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.720 [2024-11-15 12:30:37.056605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.978 [2024-11-15 12:30:37.067384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.978 [2024-11-15 12:30:37.067411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.978 [2024-11-15 12:30:37.078403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.978 [2024-11-15 12:30:37.078431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.978 [2024-11-15 12:30:37.088971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.978 [2024-11-15 12:30:37.088999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.978 [2024-11-15 12:30:37.099479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.978 [2024-11-15 12:30:37.099522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.978 [2024-11-15 12:30:37.110201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.978 [2024-11-15 12:30:37.110229] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.978 [2024-11-15 12:30:37.120735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.978 [2024-11-15 12:30:37.120763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.978 [2024-11-15 12:30:37.131187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.978 [2024-11-15 12:30:37.131215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.978 [2024-11-15 12:30:37.141874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.978 [2024-11-15 12:30:37.141904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.978 [2024-11-15 12:30:37.152804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.978 [2024-11-15 12:30:37.152833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.978 [2024-11-15 12:30:37.163455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.978 [2024-11-15 12:30:37.163483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.978 [2024-11-15 12:30:37.174305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.978 [2024-11-15 12:30:37.174333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.979 [2024-11-15 12:30:37.186641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.979 [2024-11-15 12:30:37.186669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.979 [2024-11-15 12:30:37.196784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.979 [2024-11-15 12:30:37.196813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.979 [2024-11-15 12:30:37.207648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.979 [2024-11-15 12:30:37.207677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.979 [2024-11-15 12:30:37.220396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.979 [2024-11-15 12:30:37.220424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.979 [2024-11-15 12:30:37.230364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.979 [2024-11-15 12:30:37.230392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.979 [2024-11-15 12:30:37.241314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.979 [2024-11-15 12:30:37.241342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.979 [2024-11-15 12:30:37.254406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.979 [2024-11-15 12:30:37.254435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.979 [2024-11-15 12:30:37.264857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.979 [2024-11-15 12:30:37.264892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.979 [2024-11-15 12:30:37.275478] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.979 [2024-11-15 12:30:37.275506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.979 [2024-11-15 12:30:37.288123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.979 [2024-11-15 12:30:37.288152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.979 [2024-11-15 12:30:37.299870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.979 [2024-11-15 12:30:37.299898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.979 [2024-11-15 12:30:37.308973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.979 [2024-11-15 12:30:37.309001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.979 [2024-11-15 12:30:37.320795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.979 [2024-11-15 12:30:37.320823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.333558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.333587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.343983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.344012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.354811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.354839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.367714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.367753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.378181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.378209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.389189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.389235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.399688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.399716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.410771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.410800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.423992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.424020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.434344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.434373] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.444864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.444892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.455779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.455808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.466309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.466339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.477114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.477142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.487495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.487522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.237 [2024-11-15 12:30:37.497854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.237 [2024-11-15 12:30:37.497886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.238 [2024-11-15 12:30:37.508474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.238 [2024-11-15 12:30:37.508502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.238 [2024-11-15 12:30:37.519421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.238 [2024-11-15 12:30:37.519449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.238 [2024-11-15 12:30:37.529873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.238 [2024-11-15 12:30:37.529901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.238 [2024-11-15 12:30:37.540080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.238 [2024-11-15 12:30:37.540108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.238 [2024-11-15 12:30:37.550727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.238 [2024-11-15 12:30:37.550755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.238 [2024-11-15 12:30:37.561148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.238 [2024-11-15 12:30:37.561176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.238 11759.00 IOPS, 91.87 MiB/s [2024-11-15T11:30:37.582Z] [2024-11-15 12:30:37.571869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.238 [2024-11-15 12:30:37.571897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.496 [2024-11-15 12:30:37.584064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.496 [2024-11-15 12:30:37.584093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.496 [2024-11-15 
12:30:37.594076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.496 [2024-11-15 12:30:37.594104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.496 [2024-11-15 12:30:37.605254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.496 [2024-11-15 12:30:37.605282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.496 [2024-11-15 12:30:37.618208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.496 [2024-11-15 12:30:37.618237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.496 [2024-11-15 12:30:37.628278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.496 [2024-11-15 12:30:37.628306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.496 [2024-11-15 12:30:37.639278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.496 [2024-11-15 12:30:37.639306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.496 [2024-11-15 12:30:37.651677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.496 [2024-11-15 12:30:37.651705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.496 [2024-11-15 12:30:37.663246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.496 [2024-11-15 12:30:37.663274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.496 [2024-11-15 12:30:37.672076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.496 [2024-11-15 12:30:37.672104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.496 [2024-11-15 12:30:37.683927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.496 [2024-11-15 12:30:37.683955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.496 [2024-11-15 12:30:37.697032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.497 [2024-11-15 12:30:37.697059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.497 [2024-11-15 12:30:37.707225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.497 [2024-11-15 12:30:37.707253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.497 [2024-11-15 12:30:37.718372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.497 [2024-11-15 12:30:37.718399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.497 [2024-11-15 12:30:37.731480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.497 [2024-11-15 12:30:37.731508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.497 [2024-11-15 12:30:37.741714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.497 [2024-11-15 12:30:37.741750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.497 [2024-11-15 12:30:37.752086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.497 [2024-11-15 12:30:37.752114] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.497 [2024-11-15 12:30:37.763346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.497 [2024-11-15 12:30:37.763373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.497 [2024-11-15 12:30:37.774051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.497 [2024-11-15 12:30:37.774092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.497 [2024-11-15 12:30:37.784879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.497 [2024-11-15 12:30:37.784907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.497 [2024-11-15 12:30:37.797541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.497 [2024-11-15 12:30:37.797569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.497 [2024-11-15 12:30:37.807703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.497 [2024-11-15 12:30:37.807739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.497 [2024-11-15 12:30:37.818490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.497 [2024-11-15 12:30:37.818518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.497 [2024-11-15 12:30:37.830711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.497 [2024-11-15 12:30:37.830748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.755 [2024-11-15 12:30:37.840865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.755 [2024-11-15 12:30:37.840893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:37.851763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:37.851791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:37.864046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:37.864074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:37.874157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:37.874185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:37.884699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:37.884738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:37.897317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:37.897345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:37.907351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:37.907379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:37.918697] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:37.918733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:37.931058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:37.931086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:37.940452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:37.940480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:37.952299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:37.952327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:37.962921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:37.962950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:37.973979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:37.974008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:37.984454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:37.984482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:37.995113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:37.995141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:38.005698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:38.005735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:38.016261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:38.016289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:38.027409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:38.027437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:38.038042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:38.038070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:38.051492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:38.051520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:38.061814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:38.061864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:38.072510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:38.072538] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:38.083768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:38.083797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.756 [2024-11-15 12:30:38.094636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.756 [2024-11-15 12:30:38.094671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.107265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.107293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.116708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.116745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.127986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.128014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.138680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.138708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.149872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.149900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.161018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.161046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.171820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.171849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.182985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.183013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.195827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.195855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.206216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.206244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.216758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.216786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.227586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.227614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.240201] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.240229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.249887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.249915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.261034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.261063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.014 [2024-11-15 12:30:38.271241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.014 [2024-11-15 12:30:38.271269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.015 [2024-11-15 12:30:38.281989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.015 [2024-11-15 12:30:38.282017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.015 [2024-11-15 12:30:38.294585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.015 [2024-11-15 12:30:38.294614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.015 [2024-11-15 12:30:38.304079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.015 [2024-11-15 12:30:38.304115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.015 [2024-11-15 12:30:38.315696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.015 [2024-11-15 12:30:38.315732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.015 [2024-11-15 12:30:38.328464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.015 [2024-11-15 12:30:38.328491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.015 [2024-11-15 12:30:38.338523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.015 [2024-11-15 12:30:38.338551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.015 [2024-11-15 12:30:38.348835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.015 [2024-11-15 12:30:38.348863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.272 [2024-11-15 12:30:38.359626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.272 [2024-11-15 12:30:38.359655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.272 [2024-11-15 12:30:38.370382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.272 [2024-11-15 12:30:38.370411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.272 [2024-11-15 12:30:38.381289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.272 [2024-11-15 12:30:38.381316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.272 [2024-11-15 12:30:38.392198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.272 [2024-11-15 12:30:38.392227] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.404905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.404933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.415453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.415482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.425996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.426024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.436399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.436428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.446949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.446991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.458201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.458229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.470930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.470959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.481256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.481285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.491921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.491950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.505364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.505392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.516046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.516098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.527093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.527122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.539856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.539884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.549705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.549743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.560333] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.560372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 11779.00 IOPS, 92.02 MiB/s [2024-11-15T11:30:38.617Z] [2024-11-15 12:30:38.572958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.572986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.583087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.583115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.593947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.593989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.273 [2024-11-15 12:30:38.606846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.273 [2024-11-15 12:30:38.606875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.617304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.617333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.628232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.628261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.639004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.639032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.649996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.650024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.662902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.662930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.673188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.673216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.684030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.684058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.696427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.696455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.705262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.705290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.718665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:58.531 [2024-11-15 12:30:38.718694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.728736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.728764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.739638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.739680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.750423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.750451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.761306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.761334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.772141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.772169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.782830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.782858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.795665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.795710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.806096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.806124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.817066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.817094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.827952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.827980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.838613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.838641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.850984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.851012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.861294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.861339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.531 [2024-11-15 12:30:38.872118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.531 [2024-11-15 12:30:38.872146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.790 [2024-11-15 12:30:38.884806] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.790 [2024-11-15 12:30:38.884835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.790 [2024-11-15 12:30:38.895172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.790 [2024-11-15 12:30:38.895200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.790 [2024-11-15 12:30:38.905618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.790 [2024-11-15 12:30:38.905646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:38.916953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:38.916997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:38.927834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:38.927862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:38.940402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:38.940430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:38.950447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:38.950475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:38.960778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:38.960806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:38.971681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:38.971709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:38.984166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:38.984195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:38.994508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:38.994552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:39.005491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:39.005534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:39.017814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:39.017842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:39.027039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:39.027068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:39.039069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:39.039096] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:39.049391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:39.049419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:39.060256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:39.060284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:39.072665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:39.072694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:39.082501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:39.082529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:39.093858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:39.093886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:39.104557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:39.104585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:39.115316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:39.115344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.791 [2024-11-15 12:30:39.126068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.791 [2024-11-15 12:30:39.126096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.136818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.136846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.147565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.147593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.158700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.158736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.171446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.171474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.181910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.181938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.192647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.192675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.205363] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.205407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.215471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.215500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.226150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.226178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.237162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.237190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.247549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.247592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.258434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.258462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.269816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.269854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.280551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.280579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.290986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.291014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.301416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.301444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.312318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.312346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.324725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.324765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.334103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.334131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.345685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.345728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.358040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.358068] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.368100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.368127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.379548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.379576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.050 [2024-11-15 12:30:39.390190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.050 [2024-11-15 12:30:39.390218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.401309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.401337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.414322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.414350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.424737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.424765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.435437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.435465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.447614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.447643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.457091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.457119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.468215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.468243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.481247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.481275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.491671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.491699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.502356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.502385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.512962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.512990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.523984] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.524013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.534611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.534654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.546978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.547006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.556363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.556398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.567687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.567736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 11789.25 IOPS, 92.10 MiB/s [2024-11-15T11:30:39.653Z] [2024-11-15 12:30:39.578356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.578384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.588964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.588992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.599788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.599816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.610311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.610339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.621010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.621039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.632012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.632046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.309 [2024-11-15 12:30:39.645660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.309 [2024-11-15 12:30:39.645688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.655883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.655912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.666985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.667014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.678008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:59.567 [2024-11-15 12:30:39.678036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.688944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.688972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.702068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.702096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.713789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.713832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.723100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.723130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.734770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.734799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.745623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.745652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.757038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.757080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.769868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.769919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.779810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.779838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.790639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.790667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.803212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.803240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.812823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.812851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.823454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.823482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.834304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.834332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.846449] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.846478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.856185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.856213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.868135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.868162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.878746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.878774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.889394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.889421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.567 [2024-11-15 12:30:39.900529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.567 [2024-11-15 12:30:39.900557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:39.911383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:39.911411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:39.922371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:39.922399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:39.933381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:39.933410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:39.946108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:39.946136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:39.956500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:39.956528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:39.967213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:39.967241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:39.978066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:39.978103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:39.988755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:39.988782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:39.999520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:39.999549] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.010968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.011004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.023951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.023985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.033944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.033974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.044839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.044868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.055822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.055857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.067170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.067213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.078156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.078184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.088993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.089021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.102019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.102048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.112902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.112944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.123613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.123642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.134494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.134537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.145175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.145218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.158110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.158138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.826 [2024-11-15 12:30:40.168609] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.826 [2024-11-15 12:30:40.168637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.085 [2024-11-15 12:30:40.179448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.085 [2024-11-15 12:30:40.179476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.085 [2024-11-15 12:30:40.191953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.085 [2024-11-15 12:30:40.191981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.085 [2024-11-15 12:30:40.201342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.085 [2024-11-15 12:30:40.201370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.085 [2024-11-15 12:30:40.214283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.085 [2024-11-15 12:30:40.214311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.085 [2024-11-15 12:30:40.224688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.085 [2024-11-15 12:30:40.224725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.085 [2024-11-15 12:30:40.235537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.085 [2024-11-15 12:30:40.235565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.085 [2024-11-15 12:30:40.248289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.085 [2024-11-15 12:30:40.248317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.085 [2024-11-15 12:30:40.258766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.085 [2024-11-15 12:30:40.258795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.085 [2024-11-15 12:30:40.269470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.086 [2024-11-15 12:30:40.269498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.086 [2024-11-15 12:30:40.282241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.086 [2024-11-15 12:30:40.282269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.086 [2024-11-15 12:30:40.292420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.086 [2024-11-15 12:30:40.292448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.086 [2024-11-15 12:30:40.303334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.086 [2024-11-15 12:30:40.303362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.086 [2024-11-15 12:30:40.314344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.086 [2024-11-15 12:30:40.314373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.086 [2024-11-15 12:30:40.325614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.086 [2024-11-15 12:30:40.325658] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.086 [2024-11-15 12:30:40.338800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.086 [2024-11-15 12:30:40.338828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.086 [2024-11-15 12:30:40.348833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.086 [2024-11-15 12:30:40.348861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.086 [2024-11-15 12:30:40.359702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.086 [2024-11-15 12:30:40.359740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.086 [2024-11-15 12:30:40.371086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.086 [2024-11-15 12:30:40.371113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.086 [2024-11-15 12:30:40.382184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.086 [2024-11-15 12:30:40.382212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.086 [2024-11-15 12:30:40.395072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.086 [2024-11-15 12:30:40.395100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.086 [2024-11-15 12:30:40.405474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.086 [2024-11-15 12:30:40.405502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.086 [2024-11-15 12:30:40.416335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.086 [2024-11-15 12:30:40.416364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.344 [2024-11-15 12:30:40.430969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.344 [2024-11-15 12:30:40.430999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.344 [2024-11-15 12:30:40.441909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.344 [2024-11-15 12:30:40.441938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.344 [2024-11-15 12:30:40.452518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.344 [2024-11-15 12:30:40.452546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.344 [2024-11-15 12:30:40.465252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.344 [2024-11-15 12:30:40.465281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.344 [2024-11-15 12:30:40.475557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.344 [2024-11-15 12:30:40.475585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.344 [2024-11-15 12:30:40.486602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.344 [2024-11-15 12:30:40.486630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.344 [2024-11-15 12:30:40.498856] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.344 [2024-11-15 12:30:40.498884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.344 [2024-11-15 12:30:40.509275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.344 [2024-11-15 12:30:40.509303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.344 [2024-11-15 12:30:40.520402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.520432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.531850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.531878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.542843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.542871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.553412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.553440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.564223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.564251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 11773.00 IOPS, 91.98 MiB/s [2024-11-15T11:30:40.689Z] [2024-11-15 12:30:40.576341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.576369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 00:09:00.345 Latency(us) 00:09:00.345 [2024-11-15T11:30:40.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.345 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:00.345 Nvme1n1 : 5.01 11775.06 91.99 0.00 0.00 10855.66 4733.16 23592.96 00:09:00.345 [2024-11-15T11:30:40.689Z] =================================================================================================================== 00:09:00.345 [2024-11-15T11:30:40.689Z] Total : 11775.06 91.99 0.00 0.00 10855.66 4733.16 23592.96 00:09:00.345 [2024-11-15 12:30:40.580998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.581025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.589025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.589048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.597028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.597064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.605107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.605148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 
12:30:40.613134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.613185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.621147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.621195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.629171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.629216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.637187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.637236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.645215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.645263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.653231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.653275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.661256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.661301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.669276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.669322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.677295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.677340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.345 [2024-11-15 12:30:40.685319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.345 [2024-11-15 12:30:40.685366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.603 [2024-11-15 12:30:40.693342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.603 [2024-11-15 12:30:40.693388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.604 [2024-11-15 12:30:40.701364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.604 [2024-11-15 12:30:40.701407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.604 [2024-11-15 12:30:40.709389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.604 [2024-11-15 12:30:40.709437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.604 [2024-11-15 12:30:40.717359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.604 [2024-11-15 12:30:40.717385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.604 [2024-11-15 12:30:40.725371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.604 [2024-11-15 12:30:40.725400] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.604 [2024-11-15 12:30:40.733393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.604 [2024-11-15 12:30:40.733413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.604 [2024-11-15 12:30:40.741412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.604 [2024-11-15 12:30:40.741432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.604 [2024-11-15 12:30:40.749469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.604 [2024-11-15 12:30:40.749503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.604 [2024-11-15 12:30:40.757515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.604 [2024-11-15 12:30:40.757566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.604 [2024-11-15 12:30:40.765545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.604 [2024-11-15 12:30:40.765589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.604 [2024-11-15 12:30:40.773499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.604 [2024-11-15 12:30:40.773519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.604 [2024-11-15 12:30:40.781540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.604 [2024-11-15 12:30:40.781561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.604 [2024-11-15 12:30:40.789559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.604 [2024-11-15 12:30:40.789578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (943402) - No such process 00:09:00.604 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 943402 00:09:00.604 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.604 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.604 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.604 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.604 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:00.604 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.604 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.604 delay0 00:09:00.604 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.604 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:00.604 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.604 12:30:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.604 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.604 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:00.604 [2024-11-15 12:30:40.870280] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:08.808 Initializing NVMe Controllers 00:09:08.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:08.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:08.808 Initialization complete. Launching workers. 00:09:08.808 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 247, failed: 22674 00:09:08.808 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22805, failed to submit 116 00:09:08.808 success 22702, unsuccessful 103, failed 0 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.808 rmmod nvme_tcp 00:09:08.808 rmmod nvme_fabrics 00:09:08.808 rmmod nvme_keyring 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 942097 ']' 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 942097 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 942097 ']' 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 942097 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.808 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 942097 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 942097' 00:09:08.808 killing process with pid 942097 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 942097 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 942097 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.808 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.189 00:09:10.189 real 0m28.721s 00:09:10.189 user 0m41.933s 00:09:10.189 sys 0m9.062s 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:10.189 ************************************ 00:09:10.189 END TEST nvmf_zcopy 00:09:10.189 ************************************ 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:10.189 ************************************ 00:09:10.189 START TEST nvmf_nmic 00:09:10.189 ************************************ 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:10.189 * Looking for test storage... 
00:09:10.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.189 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:10.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.190 --rc genhtml_branch_coverage=1 00:09:10.190 --rc genhtml_function_coverage=1 00:09:10.190 --rc genhtml_legend=1 00:09:10.190 --rc geninfo_all_blocks=1 00:09:10.190 --rc geninfo_unexecuted_blocks=1 00:09:10.190 00:09:10.190 ' 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:10.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.190 --rc genhtml_branch_coverage=1 00:09:10.190 --rc genhtml_function_coverage=1 00:09:10.190 --rc genhtml_legend=1 00:09:10.190 --rc geninfo_all_blocks=1 00:09:10.190 --rc geninfo_unexecuted_blocks=1 00:09:10.190 00:09:10.190 ' 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:10.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.190 --rc genhtml_branch_coverage=1 00:09:10.190 --rc genhtml_function_coverage=1 00:09:10.190 --rc genhtml_legend=1 00:09:10.190 --rc geninfo_all_blocks=1 00:09:10.190 --rc geninfo_unexecuted_blocks=1 00:09:10.190 00:09:10.190 ' 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:10.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.190 --rc genhtml_branch_coverage=1 00:09:10.190 --rc genhtml_function_coverage=1 00:09:10.190 --rc genhtml_legend=1 00:09:10.190 --rc geninfo_all_blocks=1 00:09:10.190 --rc geninfo_unexecuted_blocks=1 00:09:10.190 00:09:10.190 ' 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.190 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.448 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:10.448 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:10.448 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:10.448 
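nvmftestinit below discovers the two e810 ports, moves the target-side port into a private network namespace and assigns the 10.0.0.1/10.0.0.2 pair that the rest of the test uses. Condensed into the equivalent hand-run commands (a sketch of what test/nvmf/common.sh's nvmf_tcp_init does, assuming the ports have already been exposed as cvl_0_0 and cvl_0_1):

    # Target-side port goes into its own namespace; initiator stays on the host.
    TGT_IF=cvl_0_0  INI_IF=cvl_0_1  NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target IP
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # host -> namespaced target port
    ip netns exec "$NS" ping -c 1 10.0.0.1   # namespace -> host

Once both pings succeed, nvmf_tgt is started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt ...), so the later nvme connect from the host exercises a real cross-interface TCP path rather than loopback.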
12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:10.449 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.449 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:10.449 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:10.449 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:10.449 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.449 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.449 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.449 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:10.449 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:10.449 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:10.449 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:12.352 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:12.352 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:12.352 12:30:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.352 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:12.353 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:12.353 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.353 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:12.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:09:12.612 00:09:12.612 --- 10.0.0.2 ping statistics --- 00:09:12.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.612 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:12.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:09:12.612 00:09:12.612 --- 10.0.0.1 ping statistics --- 00:09:12.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.612 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.612 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:12.613 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:12.613 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:12.613 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:12.613 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.613 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:12.613 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=946853 00:09:12.613 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:12.613 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 946853 00:09:12.613 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 946853 ']' 00:09:12.613 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.613 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.613 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.613 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.613 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:12.613 [2024-11-15 12:30:52.938225] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:09:12.613 [2024-11-15 12:30:52.938299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.871 [2024-11-15 12:30:53.014464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.871 [2024-11-15 12:30:53.081169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.871 [2024-11-15 12:30:53.081217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.871 [2024-11-15 12:30:53.081245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.871 [2024-11-15 12:30:53.081258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.871 [2024-11-15 12:30:53.081268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.871 [2024-11-15 12:30:53.085738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.871 [2024-11-15 12:30:53.085814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.871 [2024-11-15 12:30:53.089779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.871 [2024-11-15 12:30:53.089784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.129 [2024-11-15 12:30:53.249910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.129 Malloc0 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.129 [2024-11-15 12:30:53.311071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:13.129 test case1: single bdev can't be used in multiple subsystems 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.129 [2024-11-15 12:30:53.334888] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:13.129 [2024-11-15 12:30:53.334919] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:13.129 [2024-11-15 12:30:53.334935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.129 request: 00:09:13.129 { 00:09:13.129 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:13.129 "namespace": { 00:09:13.129 "bdev_name": "Malloc0", 00:09:13.129 "no_auto_visible": false 
00:09:13.129 }, 00:09:13.129 "method": "nvmf_subsystem_add_ns", 00:09:13.129 "req_id": 1 00:09:13.129 } 00:09:13.129 Got JSON-RPC error response 00:09:13.129 response: 00:09:13.129 { 00:09:13.129 "code": -32602, 00:09:13.129 "message": "Invalid parameters" 00:09:13.129 } 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:13.129 Adding namespace failed - expected result. 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:13.129 test case2: host connect to nvmf target in multiple paths 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.129 [2024-11-15 12:30:53.343030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.129 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:13.695 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:14.261 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:14.261 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:14.261 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.261 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:14.261 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:16.789 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:16.789 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:16.789 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:16.789 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:16.789 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:16.789 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:16.789 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:16.789 [global] 00:09:16.789 thread=1 00:09:16.789 invalidate=1 00:09:16.789 rw=write 00:09:16.789 time_based=1 00:09:16.789 runtime=1 00:09:16.789 ioengine=libaio 00:09:16.789 direct=1 00:09:16.789 bs=4096 00:09:16.789 iodepth=1 00:09:16.789 norandommap=0 00:09:16.789 numjobs=1 00:09:16.789 00:09:16.789 verify_dump=1 00:09:16.789 verify_backlog=512 00:09:16.789 verify_state_save=0 00:09:16.789 do_verify=1 00:09:16.789 verify=crc32c-intel 00:09:16.789 [job0] 00:09:16.789 filename=/dev/nvme0n1 00:09:16.789 Could not set queue depth (nvme0n1) 00:09:16.789 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.789 fio-3.35 00:09:16.789 Starting 1 thread 00:09:17.724 00:09:17.724 job0: (groupid=0, jobs=1): err= 0: pid=947494: Fri Nov 15 12:30:57 2024 00:09:17.724 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:09:17.724 slat (nsec): min=17877, max=35447, avg=30315.45, stdev=7168.99 00:09:17.724 clat (usec): min=267, max=42053, avg=39604.28, stdev=8801.21 00:09:17.724 lat (usec): min=286, max=42072, avg=39634.60, stdev=8803.80 00:09:17.724 clat percentiles (usec): 00:09:17.724 | 1.00th=[ 269], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:17.724 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:09:17.724 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:17.724 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:17.724 | 99.99th=[42206] 00:09:17.724 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:09:17.724 slat (usec): min=7, max=29328, avg=75.63, stdev=1295.35 00:09:17.724 clat (usec): min=140, max=283, avg=184.94, stdev=17.68 00:09:17.724 lat (usec): min=148, max=29526, avg=260.57, stdev=1296.12 00:09:17.724 clat percentiles (usec): 00:09:17.724 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 169], 00:09:17.724 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:09:17.724 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 204], 95.00th=[ 210], 00:09:17.724 | 99.00th=[ 223], 99.50th=[ 237], 99.90th=[ 285], 99.95th=[ 285], 00:09:17.724 | 99.99th=[ 285] 00:09:17.724 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:17.724 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:17.724 lat (usec) : 250=95.51%, 500=0.56% 00:09:17.724 lat (msec) : 50=3.93% 00:09:17.724 cpu : usr=0.89%, sys=0.99%, ctx=537, majf=0, minf=1 00:09:17.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.724 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.724 00:09:17.724 Run status group 0 (all jobs): 00:09:17.724 READ: bw=87.3KiB/s (89.4kB/s), 87.3KiB/s-87.3KiB/s (89.4kB/s-89.4kB/s), io=88.0KiB (90.1kB), run=1008-1008msec 00:09:17.724 WRITE: bw=2032KiB/s (2081kB/s), 2032KiB/s-2032KiB/s (2081kB/s-2081kB/s), io=2048KiB (2097kB), run=1008-1008msec 00:09:17.724 00:09:17.724 Disk stats (read/write): 00:09:17.724 nvme0n1: ios=45/512, merge=0/0, ticks=1743/89, in_queue=1832, util=98.80% 00:09:17.724 12:30:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:17.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:17.724 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:17.724 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:17.724 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:17.724 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.983 rmmod nvme_tcp 00:09:17.983 rmmod nvme_fabrics 00:09:17.983 rmmod nvme_keyring 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 946853 ']' 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 946853 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 946853 ']' 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 946853 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 946853 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 946853' 00:09:17.983 killing process with pid 946853 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 946853 00:09:17.983 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 946853 00:09:18.243 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:18.243 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:18.243 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:18.243 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:18.243 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:18.243 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:18.243 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:18.243 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:18.243 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:18.243 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.243 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.243 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.155 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:20.155 00:09:20.155 real 0m10.112s 00:09:20.155 user 0m22.366s 00:09:20.155 sys 0m2.460s 00:09:20.155 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.155 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:20.155 ************************************ 00:09:20.155 END TEST nvmf_nmic 00:09:20.155 ************************************ 00:09:20.414 12:31:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:20.414 12:31:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:20.414 12:31:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.414 12:31:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:20.414 ************************************ 00:09:20.414 START TEST nvmf_fio_target 00:09:20.414 ************************************ 00:09:20.414 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:20.414 * Looking for test storage... 
00:09:20.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.414 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:20.414 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:20.414 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:20.414 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:20.414 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.414 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.414 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.414 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:20.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.415 --rc genhtml_branch_coverage=1 00:09:20.415 --rc genhtml_function_coverage=1 00:09:20.415 --rc genhtml_legend=1 00:09:20.415 --rc geninfo_all_blocks=1 00:09:20.415 --rc geninfo_unexecuted_blocks=1 00:09:20.415 00:09:20.415 ' 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:20.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.415 --rc genhtml_branch_coverage=1 00:09:20.415 --rc genhtml_function_coverage=1 00:09:20.415 --rc genhtml_legend=1 00:09:20.415 --rc geninfo_all_blocks=1 00:09:20.415 --rc geninfo_unexecuted_blocks=1 00:09:20.415 00:09:20.415 ' 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:20.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.415 --rc genhtml_branch_coverage=1 00:09:20.415 --rc genhtml_function_coverage=1 00:09:20.415 --rc genhtml_legend=1 00:09:20.415 --rc geninfo_all_blocks=1 00:09:20.415 --rc geninfo_unexecuted_blocks=1 00:09:20.415 00:09:20.415 ' 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:20.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.415 --rc genhtml_branch_coverage=1 00:09:20.415 --rc genhtml_function_coverage=1 00:09:20.415 --rc genhtml_legend=1 00:09:20.415 --rc geninfo_all_blocks=1 00:09:20.415 --rc geninfo_unexecuted_blocks=1 00:09:20.415 00:09:20.415 ' 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:20.415 12:31:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:20.415 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.416 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.416 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.416 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:20.416 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:20.416 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:20.416 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.951 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.951 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:22.951 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:22.951 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:22.951 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:22.951 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:22.951 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:22.951 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:22.951 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:22.951 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.952 12:31:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:22.952 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:22.952 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.952 12:31:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:22.952 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:22.952 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:22.952 12:31:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:22.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:09:22.952 00:09:22.952 --- 10.0.0.2 ping statistics --- 00:09:22.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.952 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:22.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:09:22.952 00:09:22.952 --- 10.0.0.1 ping statistics --- 00:09:22.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.952 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:22.952 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:22.952 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:22.952 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:22.953 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.953 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.953 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=949583 00:09:22.953 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:22.953 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 949583 00:09:22.953 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 949583 ']' 00:09:22.953 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.953 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.953 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.953 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.953 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.953 [2024-11-15 12:31:03.061311] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:09:22.953 [2024-11-15 12:31:03.061389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.953 [2024-11-15 12:31:03.134911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.953 [2024-11-15 12:31:03.192131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.953 [2024-11-15 12:31:03.192180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.953 [2024-11-15 12:31:03.192211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.953 [2024-11-15 12:31:03.192222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.953 [2024-11-15 12:31:03.192232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.953 [2024-11-15 12:31:03.193786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.953 [2024-11-15 12:31:03.193874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.953 [2024-11-15 12:31:03.193810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.953 [2024-11-15 12:31:03.193878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.211 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.211 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:23.211 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:23.211 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.211 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.211 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.211 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:23.468 [2024-11-15 12:31:03.646175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.468 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.726 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:23.726 12:31:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.983 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:23.983 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.548 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:24.548 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.806 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:24.806 12:31:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:25.064 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.322 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:25.322 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.581 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:25.581 12:31:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.837 12:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:25.837 12:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:26.094 12:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:26.352 12:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:26.352 12:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:26.610 12:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:26.610 12:31:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:26.867 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.125 [2024-11-15 12:31:07.343821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.125 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:27.384 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:27.641 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:28.207 12:31:08 
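Condensed, the target-side setup traced above boils down to the RPC sequence below. This is a sketch only: rpc.py stands in for the full scripts/rpc.py path used in the trace, the seven bdev_malloc_create calls are collapsed into a loop, and the --hostnqn/--hostid values of the connect step are left as placeholders since they are host-specific.

# TCP transport plus seven 64 MB / 512 B-block malloc bdevs (returned as Malloc0..Malloc6)
rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 0 6); do rpc.py bdev_malloc_create 64 512; done
# raid0 over Malloc2/Malloc3 and a concat volume over Malloc4..Malloc6
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
# one subsystem with four namespaces and a TCP listener on 10.0.0.2:4420
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
# initiator side: connect, exposing the namespaces as /dev/nvme0n1..n4
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=<hostnqn> --hostid=<hostid>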
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:28.207 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:28.207 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:28.207 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:28.207 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:28.207 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:30.735 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:30.735 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:30.735 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.735 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:30.735 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.735 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:30.735 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:30.735 [global] 00:09:30.735 thread=1 00:09:30.735 invalidate=1 00:09:30.735 rw=write 00:09:30.735 time_based=1 00:09:30.735 runtime=1 00:09:30.735 ioengine=libaio 00:09:30.735 direct=1 00:09:30.735 bs=4096 00:09:30.735 iodepth=1 00:09:30.735 norandommap=0 00:09:30.735 numjobs=1 00:09:30.735 00:09:30.735 verify_dump=1 00:09:30.735 verify_backlog=512 00:09:30.735 verify_state_save=0 00:09:30.735 do_verify=1 00:09:30.735 verify=crc32c-intel 00:09:30.735 [job0] 00:09:30.735 filename=/dev/nvme0n1 00:09:30.735 [job1] 00:09:30.735 filename=/dev/nvme0n2 00:09:30.735 [job2] 00:09:30.735 filename=/dev/nvme0n3 00:09:30.735 [job3] 00:09:30.735 filename=/dev/nvme0n4 00:09:30.735 Could not set queue depth (nvme0n1) 00:09:30.735 Could not set queue depth (nvme0n2) 00:09:30.735 Could not set queue depth (nvme0n3) 00:09:30.735 Could not set queue depth (nvme0n4) 00:09:30.735 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.735 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.735 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.735 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.735 fio-3.35 00:09:30.735 Starting 4 threads 00:09:31.668 00:09:31.668 job0: (groupid=0, jobs=1): err= 0: pid=950664: Fri Nov 15 12:31:11 2024 00:09:31.668 read: IOPS=1013, BW=4055KiB/s (4152kB/s)(4148KiB/1023msec) 00:09:31.668 slat (nsec): min=7872, max=50450, avg=16372.50, stdev=3428.46 00:09:31.668 clat (usec): min=203, max=41982, avg=654.59, stdev=4001.84 00:09:31.668 lat (usec): min=212, max=41996, avg=670.96, stdev=4002.03 00:09:31.668 clat percentiles (usec): 00:09:31.668 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 
00:09:31.668 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:09:31.668 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 318], 95.00th=[ 351], 00:09:31.668 | 99.00th=[ 545], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:09:31.668 | 99.99th=[42206] 00:09:31.668 write: IOPS=1501, BW=6006KiB/s (6150kB/s)(6144KiB/1023msec); 0 zone resets 00:09:31.668 slat (nsec): min=7870, max=52338, avg=18149.63, stdev=6280.99 00:09:31.668 clat (usec): min=125, max=337, avg=186.36, stdev=35.37 00:09:31.668 lat (usec): min=136, max=358, avg=204.51, stdev=37.99 00:09:31.668 clat percentiles (usec): 00:09:31.668 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 151], 00:09:31.668 | 30.00th=[ 165], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 188], 00:09:31.668 | 70.00th=[ 210], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 245], 00:09:31.668 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 302], 99.95th=[ 338], 00:09:31.668 | 99.99th=[ 338] 00:09:31.668 bw ( KiB/s): min= 4096, max= 8192, per=34.67%, avg=6144.00, stdev=2896.31, samples=2 00:09:31.668 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:09:31.668 lat (usec) : 250=80.06%, 500=19.20%, 750=0.35% 00:09:31.668 lat (msec) : 50=0.39% 00:09:31.668 cpu : usr=3.91%, sys=5.09%, ctx=2574, majf=0, minf=1 00:09:31.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.668 issued rwts: total=1037,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.668 job1: (groupid=0, jobs=1): err= 0: pid=950665: Fri Nov 15 12:31:11 2024 00:09:31.668 read: IOPS=625, BW=2500KiB/s (2560kB/s)(2600KiB/1040msec) 00:09:31.668 slat (nsec): min=3920, max=31664, avg=6940.47, stdev=5275.50 00:09:31.668 clat (usec): min=150, max=42174, avg=1287.82, stdev=6617.62 00:09:31.668 lat (usec): min=155, max=42202, avg=1294.76, stdev=6620.33 00:09:31.668 clat percentiles (usec): 00:09:31.668 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:09:31.668 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:09:31.668 | 70.00th=[ 231], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 293], 00:09:31.668 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:31.668 | 99.99th=[42206] 00:09:31.668 write: IOPS=984, BW=3938KiB/s (4033kB/s)(4096KiB/1040msec); 0 zone resets 00:09:31.668 slat (nsec): min=5175, max=37812, avg=9704.83, stdev=4907.69 00:09:31.668 clat (usec): min=116, max=413, avg=180.09, stdev=51.26 00:09:31.668 lat (usec): min=122, max=429, avg=189.80, stdev=53.92 00:09:31.668 clat percentiles (usec): 00:09:31.668 | 1.00th=[ 119], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 128], 00:09:31.668 | 30.00th=[ 133], 40.00th=[ 139], 50.00th=[ 200], 60.00th=[ 210], 00:09:31.668 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 241], 95.00th=[ 251], 00:09:31.668 | 99.00th=[ 285], 99.50th=[ 351], 99.90th=[ 412], 99.95th=[ 412], 00:09:31.668 | 99.99th=[ 412] 00:09:31.668 bw ( KiB/s): min= 4096, max= 4096, per=23.11%, avg=4096.00, stdev= 0.00, samples=2 00:09:31.668 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:09:31.668 lat (usec) : 250=89.19%, 500=9.80% 00:09:31.668 lat (msec) : 50=1.02% 00:09:31.668 cpu : usr=0.77%, sys=1.35%, ctx=1674, majf=0, minf=2 00:09:31.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:31.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.668 issued rwts: total=650,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.669 job2: (groupid=0, jobs=1): err= 0: pid=950666: Fri Nov 15 12:31:11 2024 00:09:31.669 read: IOPS=1122, BW=4489KiB/s (4596kB/s)(4592KiB/1023msec) 00:09:31.669 slat (nsec): min=6824, max=43768, avg=13329.92, stdev=3251.83 00:09:31.669 clat (usec): min=175, max=41977, avg=606.05, stdev=3992.10 00:09:31.669 lat (usec): min=186, max=41996, avg=619.38, stdev=3992.29 00:09:31.669 clat percentiles (usec): 00:09:31.669 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:09:31.669 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:09:31.669 | 70.00th=[ 215], 80.00th=[ 229], 90.00th=[ 247], 95.00th=[ 277], 00:09:31.669 | 99.00th=[ 523], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:09:31.669 | 99.99th=[42206] 00:09:31.669 write: IOPS=1501, BW=6006KiB/s (6150kB/s)(6144KiB/1023msec); 0 zone resets 00:09:31.669 slat (nsec): min=6589, max=42180, avg=15913.10, stdev=4620.95 00:09:31.669 clat (usec): min=132, max=375, avg=180.38, stdev=32.80 00:09:31.669 lat (usec): min=141, max=392, avg=196.30, stdev=32.81 00:09:31.669 clat percentiles (usec): 00:09:31.669 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:09:31.669 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 188], 00:09:31.669 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 227], 95.00th=[ 243], 00:09:31.669 | 99.00th=[ 253], 99.50th=[ 258], 99.90th=[ 334], 99.95th=[ 375], 00:09:31.669 | 99.99th=[ 375] 00:09:31.669 bw ( KiB/s): min= 1816, max=10472, per=34.67%, avg=6144.00, stdev=6120.72, samples=2 00:09:31.669 iops : min= 454, max= 2618, avg=1536.00, stdev=1530.18, samples=2 00:09:31.669 lat (usec) : 250=95.45%, 500=4.06%, 750=0.07% 00:09:31.669 lat (msec) : 50=0.41% 00:09:31.669 cpu : usr=2.74%, sys=3.33%, ctx=2686, majf=0, minf=1 00:09:31.669 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.669 issued rwts: total=1148,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.669 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.669 job3: (groupid=0, jobs=1): err= 0: pid=950667: Fri Nov 15 12:31:11 2024 00:09:31.669 read: IOPS=491, BW=1966KiB/s (2013kB/s)(1968KiB/1001msec) 00:09:31.669 slat (nsec): min=4646, max=53006, avg=11246.99, stdev=7480.55 00:09:31.669 clat (usec): min=164, max=42222, avg=1753.34, stdev=7795.71 00:09:31.669 lat (usec): min=169, max=42227, avg=1764.59, stdev=7798.00 00:09:31.669 clat percentiles (usec): 00:09:31.669 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:09:31.669 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 219], 00:09:31.669 | 70.00th=[ 235], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 355], 00:09:31.669 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:31.669 | 99.99th=[42206] 00:09:31.669 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:31.669 slat (nsec): min=6320, max=40132, avg=15553.93, stdev=4678.89 00:09:31.669 clat (usec): min=179, max=404, avg=234.86, stdev=19.86 00:09:31.669 lat (usec): min=203, max=427, avg=250.41, 
stdev=18.29 00:09:31.669 clat percentiles (usec): 00:09:31.669 | 1.00th=[ 196], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 221], 00:09:31.669 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 235], 00:09:31.669 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 269], 00:09:31.669 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 404], 99.95th=[ 404], 00:09:31.669 | 99.99th=[ 404] 00:09:31.669 bw ( KiB/s): min= 4096, max= 4096, per=23.11%, avg=4096.00, stdev= 0.00, samples=1 00:09:31.669 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:31.669 lat (usec) : 250=79.68%, 500=18.23%, 750=0.10% 00:09:31.669 lat (msec) : 2=0.10%, 10=0.10%, 50=1.79% 00:09:31.669 cpu : usr=0.60%, sys=1.40%, ctx=1005, majf=0, minf=1 00:09:31.669 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.669 issued rwts: total=492,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.669 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.669 00:09:31.669 Run status group 0 (all jobs): 00:09:31.669 READ: bw=12.5MiB/s (13.1MB/s), 1966KiB/s-4489KiB/s (2013kB/s-4596kB/s), io=13.0MiB (13.6MB), run=1001-1040msec 00:09:31.669 WRITE: bw=17.3MiB/s (18.1MB/s), 2046KiB/s-6006KiB/s (2095kB/s-6150kB/s), io=18.0MiB (18.9MB), run=1001-1040msec 00:09:31.669 00:09:31.669 Disk stats (read/write): 00:09:31.669 nvme0n1: ios=1083/1536, merge=0/0, ticks=1286/267, in_queue=1553, util=98.00% 00:09:31.669 nvme0n2: ios=307/512, merge=0/0, ticks=724/114, in_queue=838, util=86.67% 00:09:31.669 nvme0n3: ios=1170/1536, merge=0/0, ticks=1465/263, in_queue=1728, util=98.64% 00:09:31.669 nvme0n4: ios=533/512, merge=0/0, ticks=1659/115, in_queue=1774, util=98.42% 00:09:31.669 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:31.669 [global] 00:09:31.669 thread=1 00:09:31.669 invalidate=1 00:09:31.669 rw=randwrite 00:09:31.669 time_based=1 00:09:31.669 runtime=1 00:09:31.669 ioengine=libaio 00:09:31.669 direct=1 00:09:31.669 bs=4096 00:09:31.669 iodepth=1 00:09:31.669 norandommap=0 00:09:31.669 numjobs=1 00:09:31.669 00:09:31.669 verify_dump=1 00:09:31.669 verify_backlog=512 00:09:31.669 verify_state_save=0 00:09:31.669 do_verify=1 00:09:31.669 verify=crc32c-intel 00:09:31.669 [job0] 00:09:31.669 filename=/dev/nvme0n1 00:09:31.669 [job1] 00:09:31.669 filename=/dev/nvme0n2 00:09:31.669 [job2] 00:09:31.669 filename=/dev/nvme0n3 00:09:31.669 [job3] 00:09:31.669 filename=/dev/nvme0n4 00:09:31.927 Could not set queue depth (nvme0n1) 00:09:31.927 Could not set queue depth (nvme0n2) 00:09:31.927 Could not set queue depth (nvme0n3) 00:09:31.927 Could not set queue depth (nvme0n4) 00:09:31.927 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.927 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.927 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.927 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.927 fio-3.35 00:09:31.927 Starting 4 threads 00:09:33.300 00:09:33.300 job0: (groupid=0, jobs=1): err= 0: pid=950893: Fri 
Nov 15 12:31:13 2024 00:09:33.300 read: IOPS=2192, BW=8771KiB/s (8982kB/s)(8780KiB/1001msec) 00:09:33.300 slat (nsec): min=4498, max=65519, avg=8555.70, stdev=4724.76 00:09:33.300 clat (usec): min=172, max=1126, avg=214.53, stdev=44.78 00:09:33.300 lat (usec): min=177, max=1134, avg=223.09, stdev=46.11 00:09:33.300 clat percentiles (usec): 00:09:33.300 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:09:33.300 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:09:33.300 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 241], 95.00th=[ 273], 00:09:33.300 | 99.00th=[ 347], 99.50th=[ 449], 99.90th=[ 775], 99.95th=[ 824], 00:09:33.300 | 99.99th=[ 1123] 00:09:33.300 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:33.300 slat (nsec): min=6594, max=79617, avg=13193.39, stdev=7015.73 00:09:33.300 clat (usec): min=127, max=926, avg=180.36, stdev=52.06 00:09:33.300 lat (usec): min=135, max=938, avg=193.55, stdev=55.71 00:09:33.300 clat percentiles (usec): 00:09:33.300 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:09:33.300 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:09:33.300 | 70.00th=[ 182], 80.00th=[ 196], 90.00th=[ 225], 95.00th=[ 289], 00:09:33.300 | 99.00th=[ 392], 99.50th=[ 412], 99.90th=[ 490], 99.95th=[ 506], 00:09:33.300 | 99.99th=[ 930] 00:09:33.300 bw ( KiB/s): min=10200, max=10200, per=36.25%, avg=10200.00, stdev= 0.00, samples=1 00:09:33.300 iops : min= 2550, max= 2550, avg=2550.00, stdev= 0.00, samples=1 00:09:33.300 lat (usec) : 250=91.82%, 500=7.99%, 750=0.08%, 1000=0.08% 00:09:33.300 lat (msec) : 2=0.02% 00:09:33.300 cpu : usr=3.10%, sys=5.50%, ctx=4755, majf=0, minf=1 00:09:33.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.300 issued rwts: total=2195,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.300 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.300 job1: (groupid=0, jobs=1): err= 0: pid=950894: Fri Nov 15 12:31:13 2024 00:09:33.300 read: IOPS=20, BW=81.6KiB/s (83.6kB/s)(84.0KiB/1029msec) 00:09:33.300 slat (nsec): min=14547, max=34732, avg=21132.48, stdev=7708.84 00:09:33.300 clat (usec): min=40614, max=41941, avg=40997.30, stdev=241.53 00:09:33.300 lat (usec): min=40632, max=41957, avg=41018.43, stdev=240.09 00:09:33.300 clat percentiles (usec): 00:09:33.300 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:33.300 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:33.300 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:33.300 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:33.300 | 99.99th=[41681] 00:09:33.300 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:09:33.300 slat (nsec): min=8047, max=72530, avg=23283.72, stdev=11589.93 00:09:33.300 clat (usec): min=137, max=513, avg=296.56, stdev=79.27 00:09:33.300 lat (usec): min=165, max=540, avg=319.84, stdev=78.73 00:09:33.300 clat percentiles (usec): 00:09:33.300 | 1.00th=[ 153], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 223], 00:09:33.300 | 30.00th=[ 243], 40.00th=[ 273], 50.00th=[ 293], 60.00th=[ 322], 00:09:33.300 | 70.00th=[ 343], 80.00th=[ 371], 90.00th=[ 404], 95.00th=[ 433], 00:09:33.300 | 99.00th=[ 469], 99.50th=[ 486], 99.90th=[ 515], 99.95th=[ 515], 
00:09:33.300 | 99.99th=[ 515] 00:09:33.300 bw ( KiB/s): min= 4096, max= 4096, per=14.56%, avg=4096.00, stdev= 0.00, samples=1 00:09:33.300 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:33.300 lat (usec) : 250=32.46%, 500=63.23%, 750=0.38% 00:09:33.300 lat (msec) : 50=3.94% 00:09:33.300 cpu : usr=0.58%, sys=1.17%, ctx=536, majf=0, minf=1 00:09:33.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.300 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.300 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.300 job2: (groupid=0, jobs=1): err= 0: pid=950895: Fri Nov 15 12:31:13 2024 00:09:33.300 read: IOPS=1610, BW=6442KiB/s (6596kB/s)(6448KiB/1001msec) 00:09:33.300 slat (nsec): min=6017, max=65368, avg=11912.02, stdev=7032.65 00:09:33.300 clat (usec): min=205, max=615, avg=281.60, stdev=72.71 00:09:33.300 lat (usec): min=214, max=635, avg=293.52, stdev=76.62 00:09:33.300 clat percentiles (usec): 00:09:33.300 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:09:33.300 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:09:33.300 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 379], 95.00th=[ 482], 00:09:33.300 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 603], 99.95th=[ 619], 00:09:33.300 | 99.99th=[ 619] 00:09:33.300 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:33.300 slat (nsec): min=9242, max=76362, avg=17367.95, stdev=8420.70 00:09:33.300 clat (usec): min=166, max=569, avg=232.63, stdev=65.02 00:09:33.300 lat (usec): min=178, max=595, avg=249.99, stdev=68.70 00:09:33.300 clat percentiles (usec): 00:09:33.300 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:09:33.300 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:09:33.300 | 70.00th=[ 227], 80.00th=[ 251], 90.00th=[ 334], 95.00th=[ 396], 00:09:33.300 | 99.00th=[ 453], 99.50th=[ 490], 99.90th=[ 537], 99.95th=[ 553], 00:09:33.300 | 99.99th=[ 570] 00:09:33.300 bw ( KiB/s): min= 8192, max= 8192, per=29.12%, avg=8192.00, stdev= 0.00, samples=1 00:09:33.300 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:33.300 lat (usec) : 250=58.88%, 500=39.02%, 750=2.10% 00:09:33.300 cpu : usr=4.00%, sys=7.20%, ctx=3661, majf=0, minf=1 00:09:33.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.300 issued rwts: total=1612,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.300 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.300 job3: (groupid=0, jobs=1): err= 0: pid=950896: Fri Nov 15 12:31:13 2024 00:09:33.300 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:33.300 slat (nsec): min=7065, max=49227, avg=12194.95, stdev=5223.43 00:09:33.300 clat (usec): min=202, max=1110, avg=256.02, stdev=42.86 00:09:33.300 lat (usec): min=210, max=1126, avg=268.22, stdev=45.23 00:09:33.300 clat percentiles (usec): 00:09:33.300 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 235], 00:09:33.300 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:09:33.300 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 
00:09:33.300 | 99.00th=[ 424], 99.50th=[ 529], 99.90th=[ 791], 99.95th=[ 938], 00:09:33.300 | 99.99th=[ 1106] 00:09:33.300 write: IOPS=2115, BW=8464KiB/s (8667kB/s)(8472KiB/1001msec); 0 zone resets 00:09:33.300 slat (nsec): min=7380, max=66786, avg=14061.80, stdev=6909.25 00:09:33.300 clat (usec): min=151, max=404, avg=191.37, stdev=28.08 00:09:33.300 lat (usec): min=159, max=449, avg=205.43, stdev=31.67 00:09:33.300 clat percentiles (usec): 00:09:33.300 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:09:33.300 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 192], 00:09:33.300 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 239], 00:09:33.300 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 383], 99.95th=[ 388], 00:09:33.300 | 99.99th=[ 404] 00:09:33.300 bw ( KiB/s): min= 8192, max= 8192, per=29.12%, avg=8192.00, stdev= 0.00, samples=1 00:09:33.300 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:33.300 lat (usec) : 250=72.47%, 500=27.24%, 750=0.19%, 1000=0.07% 00:09:33.300 lat (msec) : 2=0.02% 00:09:33.300 cpu : usr=3.90%, sys=7.60%, ctx=4167, majf=0, minf=1 00:09:33.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.300 issued rwts: total=2048,2118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.300 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.300 00:09:33.300 Run status group 0 (all jobs): 00:09:33.301 READ: bw=22.3MiB/s (23.4MB/s), 81.6KiB/s-8771KiB/s (83.6kB/s-8982kB/s), io=23.0MiB (24.1MB), run=1001-1029msec 00:09:33.301 WRITE: bw=27.5MiB/s (28.8MB/s), 1990KiB/s-9.99MiB/s (2038kB/s-10.5MB/s), io=28.3MiB (29.6MB), run=1001-1029msec 00:09:33.301 00:09:33.301 Disk stats (read/write): 00:09:33.301 nvme0n1: ios=2097/2048, merge=0/0, ticks=430/339, in_queue=769, util=86.67% 00:09:33.301 nvme0n2: ios=41/512, merge=0/0, ticks=1643/145, in_queue=1788, util=98.38% 00:09:33.301 nvme0n3: ios=1579/1540, merge=0/0, ticks=928/340, in_queue=1268, util=99.06% 00:09:33.301 nvme0n4: ios=1654/2048, merge=0/0, ticks=1374/367, in_queue=1741, util=98.43% 00:09:33.301 12:31:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:33.301 [global] 00:09:33.301 thread=1 00:09:33.301 invalidate=1 00:09:33.301 rw=write 00:09:33.301 time_based=1 00:09:33.301 runtime=1 00:09:33.301 ioengine=libaio 00:09:33.301 direct=1 00:09:33.301 bs=4096 00:09:33.301 iodepth=128 00:09:33.301 norandommap=0 00:09:33.301 numjobs=1 00:09:33.301 00:09:33.301 verify_dump=1 00:09:33.301 verify_backlog=512 00:09:33.301 verify_state_save=0 00:09:33.301 do_verify=1 00:09:33.301 verify=crc32c-intel 00:09:33.301 [job0] 00:09:33.301 filename=/dev/nvme0n1 00:09:33.301 [job1] 00:09:33.301 filename=/dev/nvme0n2 00:09:33.301 [job2] 00:09:33.301 filename=/dev/nvme0n3 00:09:33.301 [job3] 00:09:33.301 filename=/dev/nvme0n4 00:09:33.301 Could not set queue depth (nvme0n1) 00:09:33.301 Could not set queue depth (nvme0n2) 00:09:33.301 Could not set queue depth (nvme0n3) 00:09:33.301 Could not set queue depth (nvme0n4) 00:09:33.559 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.559 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:09:33.559 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.559 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.559 fio-3.35 00:09:33.559 Starting 4 threads 00:09:34.933 00:09:34.933 job0: (groupid=0, jobs=1): err= 0: pid=951148: Fri Nov 15 12:31:14 2024 00:09:34.933 read: IOPS=2923, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1005msec) 00:09:34.933 slat (usec): min=3, max=22375, avg=157.35, stdev=1082.43 00:09:34.933 clat (usec): min=2348, max=49568, avg=17658.94, stdev=8021.39 00:09:34.933 lat (usec): min=5887, max=49581, avg=17816.29, stdev=8109.47 00:09:34.933 clat percentiles (usec): 00:09:34.933 | 1.00th=[ 6718], 5.00th=[10814], 10.00th=[11994], 20.00th=[12387], 00:09:34.933 | 30.00th=[13042], 40.00th=[13435], 50.00th=[14222], 60.00th=[14877], 00:09:34.933 | 70.00th=[20055], 80.00th=[22414], 90.00th=[26870], 95.00th=[38011], 00:09:34.933 | 99.00th=[45351], 99.50th=[47449], 99.90th=[49546], 99.95th=[49546], 00:09:34.933 | 99.99th=[49546] 00:09:34.933 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:09:34.933 slat (usec): min=4, max=28475, avg=166.08, stdev=1032.92 00:09:34.933 clat (usec): min=2841, max=66169, avg=24594.43, stdev=14463.71 00:09:34.933 lat (usec): min=2848, max=66193, avg=24760.52, stdev=14550.26 00:09:34.933 clat percentiles (usec): 00:09:34.933 | 1.00th=[ 4686], 5.00th=[ 6980], 10.00th=[10552], 20.00th=[11994], 00:09:34.933 | 30.00th=[19530], 40.00th=[20841], 50.00th=[22152], 60.00th=[22676], 00:09:34.933 | 70.00th=[25297], 80.00th=[28967], 90.00th=[51119], 95.00th=[61604], 00:09:34.933 | 99.00th=[65799], 99.50th=[65799], 99.90th=[66323], 99.95th=[66323], 00:09:34.933 | 99.99th=[66323] 00:09:34.933 bw ( KiB/s): min=10384, max=14192, per=18.86%, avg=12288.00, stdev=2692.66, samples=2 00:09:34.933 iops : min= 2596, max= 3548, avg=3072.00, stdev=673.17, samples=2 00:09:34.933 lat (msec) : 4=0.22%, 10=5.76%, 20=44.73%, 50=44.14%, 100=5.16% 00:09:34.933 cpu : usr=3.29%, sys=5.98%, ctx=319, majf=0, minf=1 00:09:34.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:34.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.933 issued rwts: total=2938,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.933 job1: (groupid=0, jobs=1): err= 0: pid=951168: Fri Nov 15 12:31:14 2024 00:09:34.933 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:09:34.933 slat (usec): min=3, max=18886, avg=148.99, stdev=1009.25 00:09:34.933 clat (usec): min=6236, max=39867, avg=17594.93, stdev=6951.26 00:09:34.933 lat (usec): min=6245, max=39884, avg=17743.92, stdev=7017.65 00:09:34.933 clat percentiles (usec): 00:09:34.933 | 1.00th=[ 7046], 5.00th=[10552], 10.00th=[11600], 20.00th=[12125], 00:09:34.933 | 30.00th=[12387], 40.00th=[13042], 50.00th=[14091], 60.00th=[18220], 00:09:34.933 | 70.00th=[21890], 80.00th=[22152], 90.00th=[28181], 95.00th=[32113], 00:09:34.933 | 99.00th=[37487], 99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:09:34.933 | 99.99th=[40109] 00:09:34.933 write: IOPS=2777, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1013msec); 0 zone resets 00:09:34.933 slat (usec): min=3, max=17002, avg=208.23, stdev=1056.36 00:09:34.933 clat (usec): min=1535, max=112985, avg=29643.36, stdev=20394.37 00:09:34.933 lat 
(usec): min=1542, max=112992, avg=29851.58, stdev=20479.51 00:09:34.933 clat percentiles (msec): 00:09:34.933 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 19], 00:09:34.933 | 30.00th=[ 22], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 26], 00:09:34.933 | 70.00th=[ 26], 80.00th=[ 40], 90.00th=[ 61], 95.00th=[ 71], 00:09:34.933 | 99.00th=[ 107], 99.50th=[ 112], 99.90th=[ 113], 99.95th=[ 113], 00:09:34.933 | 99.99th=[ 113] 00:09:34.933 bw ( KiB/s): min=10176, max=11320, per=16.50%, avg=10748.00, stdev=808.93, samples=2 00:09:34.933 iops : min= 2544, max= 2830, avg=2687.00, stdev=202.23, samples=2 00:09:34.933 lat (msec) : 2=0.22%, 4=0.17%, 10=5.71%, 20=35.23%, 50=50.60% 00:09:34.933 lat (msec) : 100=7.05%, 250=1.02% 00:09:34.933 cpu : usr=3.85%, sys=4.35%, ctx=322, majf=0, minf=1 00:09:34.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:34.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.933 issued rwts: total=2560,2814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.933 job2: (groupid=0, jobs=1): err= 0: pid=951203: Fri Nov 15 12:31:14 2024 00:09:34.933 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec) 00:09:34.933 slat (usec): min=3, max=11089, avg=102.48, stdev=695.15 00:09:34.933 clat (usec): min=4527, max=23118, avg=12902.99, stdev=3269.86 00:09:34.933 lat (usec): min=4535, max=23135, avg=13005.48, stdev=3307.12 00:09:34.933 clat percentiles (usec): 00:09:34.933 | 1.00th=[ 5080], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10945], 00:09:34.933 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:09:34.933 | 70.00th=[13042], 80.00th=[15401], 90.00th=[18220], 95.00th=[20055], 00:09:34.933 | 99.00th=[22152], 99.50th=[22676], 99.90th=[22938], 99.95th=[23200], 00:09:34.933 | 99.99th=[23200] 00:09:34.933 write: IOPS=5434, BW=21.2MiB/s (22.3MB/s)(21.5MiB/1011msec); 0 zone resets 00:09:34.933 slat (usec): min=4, max=10080, avg=77.08, stdev=383.66 00:09:34.933 clat (usec): min=1347, max=23077, avg=11297.01, stdev=2559.41 00:09:34.933 lat (usec): min=1357, max=23087, avg=11374.09, stdev=2593.27 00:09:34.933 clat percentiles (usec): 00:09:34.933 | 1.00th=[ 3752], 5.00th=[ 5604], 10.00th=[ 7111], 20.00th=[10290], 00:09:34.933 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:09:34.933 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12780], 95.00th=[12911], 00:09:34.933 | 99.00th=[20055], 99.50th=[22152], 99.90th=[22938], 99.95th=[22938], 00:09:34.933 | 99.99th=[23200] 00:09:34.933 bw ( KiB/s): min=21296, max=21640, per=32.95%, avg=21468.00, stdev=243.24, samples=2 00:09:34.933 iops : min= 5324, max= 5410, avg=5367.00, stdev=60.81, samples=2 00:09:34.933 lat (msec) : 2=0.05%, 4=0.53%, 10=13.93%, 20=82.37%, 50=3.12% 00:09:34.933 cpu : usr=6.34%, sys=10.50%, ctx=628, majf=0, minf=1 00:09:34.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:34.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.933 issued rwts: total=5120,5494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.933 job3: (groupid=0, jobs=1): err= 0: pid=951215: Fri Nov 15 12:31:14 2024 00:09:34.933 read: IOPS=5015, BW=19.6MiB/s 
(20.5MB/s)(19.7MiB/1003msec) 00:09:34.933 slat (usec): min=2, max=7539, avg=98.94, stdev=569.41 00:09:34.933 clat (usec): min=650, max=21181, avg=12326.12, stdev=1686.98 00:09:34.933 lat (usec): min=4105, max=21185, avg=12425.06, stdev=1747.00 00:09:34.933 clat percentiles (usec): 00:09:34.933 | 1.00th=[ 7373], 5.00th=[ 9241], 10.00th=[11076], 20.00th=[11863], 00:09:34.933 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:09:34.933 | 70.00th=[12518], 80.00th=[13042], 90.00th=[14615], 95.00th=[15401], 00:09:34.933 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17695], 99.95th=[18220], 00:09:34.933 | 99.99th=[21103] 00:09:34.933 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:34.933 slat (usec): min=3, max=6993, avg=92.11, stdev=543.06 00:09:34.933 clat (usec): min=6557, max=26227, avg=12658.79, stdev=2243.47 00:09:34.933 lat (usec): min=6563, max=26238, avg=12750.90, stdev=2277.34 00:09:34.933 clat percentiles (usec): 00:09:34.933 | 1.00th=[ 7504], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[11863], 00:09:34.933 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:09:34.933 | 70.00th=[13042], 80.00th=[13435], 90.00th=[14222], 95.00th=[16057], 00:09:34.933 | 99.00th=[22676], 99.50th=[23200], 99.90th=[23987], 99.95th=[25822], 00:09:34.933 | 99.99th=[26346] 00:09:34.933 bw ( KiB/s): min=20480, max=20480, per=31.43%, avg=20480.00, stdev= 0.00, samples=2 00:09:34.934 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:34.934 lat (usec) : 750=0.01% 00:09:34.934 lat (msec) : 10=7.88%, 20=91.16%, 50=0.95% 00:09:34.934 cpu : usr=3.59%, sys=5.79%, ctx=433, majf=0, minf=2 00:09:34.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:34.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.934 issued rwts: total=5031,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.934 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.934 00:09:34.934 Run status group 0 (all jobs): 00:09:34.934 READ: bw=60.3MiB/s (63.3MB/s), 9.87MiB/s-19.8MiB/s (10.4MB/s-20.7MB/s), io=61.1MiB (64.1MB), run=1003-1013msec 00:09:34.934 WRITE: bw=63.6MiB/s (66.7MB/s), 10.9MiB/s-21.2MiB/s (11.4MB/s-22.3MB/s), io=64.5MiB (67.6MB), run=1003-1013msec 00:09:34.934 00:09:34.934 Disk stats (read/write): 00:09:34.934 nvme0n1: ios=2601/2671, merge=0/0, ticks=43804/59347, in_queue=103151, util=91.58% 00:09:34.934 nvme0n2: ios=2068/2319, merge=0/0, ticks=35730/68253, in_queue=103983, util=86.59% 00:09:34.934 nvme0n3: ios=4267/4608, merge=0/0, ticks=52827/50479, in_queue=103306, util=100.00% 00:09:34.934 nvme0n4: ios=4125/4519, merge=0/0, ticks=22248/24638, in_queue=46886, util=95.25% 00:09:34.934 12:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:34.934 [global] 00:09:34.934 thread=1 00:09:34.934 invalidate=1 00:09:34.934 rw=randwrite 00:09:34.934 time_based=1 00:09:34.934 runtime=1 00:09:34.934 ioengine=libaio 00:09:34.934 direct=1 00:09:34.934 bs=4096 00:09:34.934 iodepth=128 00:09:34.934 norandommap=0 00:09:34.934 numjobs=1 00:09:34.934 00:09:34.934 verify_dump=1 00:09:34.934 verify_backlog=512 00:09:34.934 verify_state_save=0 00:09:34.934 do_verify=1 00:09:34.934 verify=crc32c-intel 00:09:34.934 [job0] 00:09:34.934 filename=/dev/nvme0n1 00:09:34.934 
[job1] 00:09:34.934 filename=/dev/nvme0n2 00:09:34.934 [job2] 00:09:34.934 filename=/dev/nvme0n3 00:09:34.934 [job3] 00:09:34.934 filename=/dev/nvme0n4 00:09:34.934 Could not set queue depth (nvme0n1) 00:09:34.934 Could not set queue depth (nvme0n2) 00:09:34.934 Could not set queue depth (nvme0n3) 00:09:34.934 Could not set queue depth (nvme0n4) 00:09:34.934 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.934 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.934 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.934 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.934 fio-3.35 00:09:34.934 Starting 4 threads 00:09:36.311 00:09:36.311 job0: (groupid=0, jobs=1): err= 0: pid=951478: Fri Nov 15 12:31:16 2024 00:09:36.311 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:09:36.311 slat (nsec): min=1931, max=17742k, avg=96301.30, stdev=595253.75 00:09:36.311 clat (usec): min=4242, max=29826, avg=11941.68, stdev=2900.03 00:09:36.311 lat (usec): min=4248, max=29889, avg=12037.98, stdev=2927.98 00:09:36.311 clat percentiles (usec): 00:09:36.311 | 1.00th=[ 6063], 5.00th=[ 7832], 10.00th=[ 8979], 20.00th=[10552], 00:09:36.311 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11863], 00:09:36.311 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14877], 95.00th=[16450], 00:09:36.311 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25035], 99.95th=[25035], 00:09:36.311 | 99.99th=[29754] 00:09:36.311 write: IOPS=5180, BW=20.2MiB/s (21.2MB/s)(20.4MiB/1006msec); 0 zone resets 00:09:36.311 slat (usec): min=2, max=23266, avg=80.93, stdev=630.32 00:09:36.311 clat (usec): min=702, max=60575, avg=12777.13, stdev=6341.96 00:09:36.311 lat (usec): min=706, max=60579, avg=12858.06, stdev=6391.03 00:09:36.311 clat percentiles (usec): 00:09:36.311 | 1.00th=[ 2868], 5.00th=[ 5800], 10.00th=[ 7046], 20.00th=[ 9896], 00:09:36.311 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:09:36.311 | 70.00th=[12518], 80.00th=[13173], 90.00th=[18744], 95.00th=[25560], 00:09:36.311 | 99.00th=[45876], 99.50th=[51643], 99.90th=[59507], 99.95th=[59507], 00:09:36.311 | 99.99th=[60556] 00:09:36.311 bw ( KiB/s): min=20176, max=20784, per=31.39%, avg=20480.00, stdev=429.92, samples=2 00:09:36.311 iops : min= 5044, max= 5196, avg=5120.00, stdev=107.48, samples=2 00:09:36.311 lat (usec) : 750=0.04% 00:09:36.311 lat (msec) : 2=0.16%, 4=0.64%, 10=17.61%, 20=75.45%, 50=5.74% 00:09:36.311 lat (msec) : 100=0.37% 00:09:36.311 cpu : usr=4.28%, sys=6.07%, ctx=671, majf=0, minf=1 00:09:36.311 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:36.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.311 issued rwts: total=5120,5212,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.311 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.311 job1: (groupid=0, jobs=1): err= 0: pid=951479: Fri Nov 15 12:31:16 2024 00:09:36.311 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:09:36.311 slat (usec): min=2, max=16745, avg=106.93, stdev=752.64 00:09:36.311 clat (usec): min=4661, max=52049, avg=13266.74, stdev=5024.86 00:09:36.311 lat (usec): min=4675, max=52067, avg=13373.68, 
stdev=5091.68 00:09:36.311 clat percentiles (usec): 00:09:36.311 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[10159], 00:09:36.311 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11338], 60.00th=[13304], 00:09:36.311 | 70.00th=[14615], 80.00th=[16319], 90.00th=[18482], 95.00th=[21103], 00:09:36.311 | 99.00th=[36963], 99.50th=[42730], 99.90th=[52167], 99.95th=[52167], 00:09:36.311 | 99.99th=[52167] 00:09:36.311 write: IOPS=4646, BW=18.2MiB/s (19.0MB/s)(18.3MiB/1008msec); 0 zone resets 00:09:36.311 slat (usec): min=3, max=10037, avg=99.35, stdev=495.84 00:09:36.311 clat (usec): min=1477, max=58556, avg=14240.32, stdev=9849.89 00:09:36.311 lat (usec): min=1489, max=58563, avg=14339.68, stdev=9910.08 00:09:36.311 clat percentiles (usec): 00:09:36.311 | 1.00th=[ 4146], 5.00th=[ 6456], 10.00th=[ 7832], 20.00th=[ 9241], 00:09:36.311 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:09:36.311 | 70.00th=[11469], 80.00th=[14877], 90.00th=[28967], 95.00th=[41157], 00:09:36.311 | 99.00th=[51119], 99.50th=[53216], 99.90th=[58459], 99.95th=[58459], 00:09:36.311 | 99.99th=[58459] 00:09:36.311 bw ( KiB/s): min=18032, max=18832, per=28.26%, avg=18432.00, stdev=565.69, samples=2 00:09:36.311 iops : min= 4508, max= 4708, avg=4608.00, stdev=141.42, samples=2 00:09:36.311 lat (msec) : 2=0.27%, 4=0.22%, 10=18.20%, 20=71.30%, 50=9.35% 00:09:36.311 lat (msec) : 100=0.67% 00:09:36.311 cpu : usr=4.37%, sys=9.24%, ctx=585, majf=0, minf=1 00:09:36.311 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:36.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.311 issued rwts: total=4608,4684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.311 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.311 job2: (groupid=0, jobs=1): err= 0: pid=951480: Fri Nov 15 12:31:16 2024 00:09:36.311 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:09:36.311 slat (usec): min=2, max=15996, avg=130.79, stdev=904.60 00:09:36.311 clat (usec): min=4316, max=45060, avg=16736.78, stdev=6969.92 00:09:36.311 lat (usec): min=4325, max=45069, avg=16867.57, stdev=7016.39 00:09:36.311 clat percentiles (usec): 00:09:36.311 | 1.00th=[ 5342], 5.00th=[ 7242], 10.00th=[10945], 20.00th=[12387], 00:09:36.311 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13960], 60.00th=[16909], 00:09:36.311 | 70.00th=[17957], 80.00th=[20317], 90.00th=[28443], 95.00th=[32637], 00:09:36.311 | 99.00th=[38011], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:09:36.311 | 99.99th=[44827] 00:09:36.311 write: IOPS=3266, BW=12.8MiB/s (13.4MB/s)(12.9MiB/1011msec); 0 zone resets 00:09:36.311 slat (usec): min=2, max=20282, avg=165.68, stdev=928.07 00:09:36.311 clat (usec): min=867, max=60227, avg=23288.64, stdev=11578.97 00:09:36.311 lat (usec): min=874, max=60234, avg=23454.32, stdev=11682.66 00:09:36.311 clat percentiles (usec): 00:09:36.311 | 1.00th=[ 4113], 5.00th=[ 8979], 10.00th=[11863], 20.00th=[12387], 00:09:36.311 | 30.00th=[16057], 40.00th=[18744], 50.00th=[22938], 60.00th=[24511], 00:09:36.311 | 70.00th=[25560], 80.00th=[28967], 90.00th=[41157], 95.00th=[47973], 00:09:36.311 | 99.00th=[56886], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:09:36.311 | 99.99th=[60031] 00:09:36.311 bw ( KiB/s): min=12528, max=12872, per=19.47%, avg=12700.00, stdev=243.24, samples=2 00:09:36.311 iops : min= 3132, max= 3218, avg=3175.00, stdev=60.81, samples=2 00:09:36.311 lat 
(usec) : 1000=0.06% 00:09:36.311 lat (msec) : 4=0.41%, 10=6.81%, 20=52.42%, 50=38.00%, 100=2.31% 00:09:36.311 cpu : usr=2.18%, sys=3.76%, ctx=349, majf=0, minf=1 00:09:36.311 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:36.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.311 issued rwts: total=3072,3302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.311 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.311 job3: (groupid=0, jobs=1): err= 0: pid=951481: Fri Nov 15 12:31:16 2024 00:09:36.311 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:09:36.311 slat (usec): min=2, max=24744, avg=128.11, stdev=877.66 00:09:36.311 clat (usec): min=10145, max=45736, avg=16687.12, stdev=6359.11 00:09:36.311 lat (usec): min=10223, max=62830, avg=16815.23, stdev=6414.98 00:09:36.311 clat percentiles (usec): 00:09:36.311 | 1.00th=[10683], 5.00th=[12518], 10.00th=[13042], 20.00th=[13829], 00:09:36.311 | 30.00th=[14353], 40.00th=[14746], 50.00th=[14877], 60.00th=[15139], 00:09:36.311 | 70.00th=[15533], 80.00th=[16712], 90.00th=[20317], 95.00th=[32637], 00:09:36.311 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:09:36.311 | 99.99th=[45876] 00:09:36.311 write: IOPS=3280, BW=12.8MiB/s (13.4MB/s)(12.9MiB/1003msec); 0 zone resets 00:09:36.311 slat (usec): min=3, max=17290, avg=177.89, stdev=998.61 00:09:36.311 clat (usec): min=1756, max=75274, avg=22871.60, stdev=14562.29 00:09:36.311 lat (usec): min=7825, max=75280, avg=23049.49, stdev=14638.18 00:09:36.311 clat percentiles (usec): 00:09:36.311 | 1.00th=[ 8979], 5.00th=[11600], 10.00th=[12518], 20.00th=[12780], 00:09:36.311 | 30.00th=[13304], 40.00th=[14222], 50.00th=[15270], 60.00th=[22676], 00:09:36.311 | 70.00th=[24249], 80.00th=[26870], 90.00th=[43254], 95.00th=[61604], 00:09:36.311 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:09:36.311 | 99.99th=[74974] 00:09:36.311 bw ( KiB/s): min=10272, max=15024, per=19.39%, avg=12648.00, stdev=3360.17, samples=2 00:09:36.311 iops : min= 2568, max= 3756, avg=3162.00, stdev=840.04, samples=2 00:09:36.311 lat (msec) : 2=0.02%, 10=1.01%, 20=70.10%, 50=24.71%, 100=4.17% 00:09:36.311 cpu : usr=2.00%, sys=4.29%, ctx=338, majf=0, minf=1 00:09:36.311 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:36.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.311 issued rwts: total=3072,3290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.311 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.311 00:09:36.311 Run status group 0 (all jobs): 00:09:36.311 READ: bw=61.3MiB/s (64.3MB/s), 11.9MiB/s-19.9MiB/s (12.4MB/s-20.8MB/s), io=62.0MiB (65.0MB), run=1003-1011msec 00:09:36.311 WRITE: bw=63.7MiB/s (66.8MB/s), 12.8MiB/s-20.2MiB/s (13.4MB/s-21.2MB/s), io=64.4MiB (67.5MB), run=1003-1011msec 00:09:36.311 00:09:36.311 Disk stats (read/write): 00:09:36.311 nvme0n1: ios=4109/4607, merge=0/0, ticks=31559/40643, in_queue=72202, util=85.27% 00:09:36.311 nvme0n2: ios=4011/4096, merge=0/0, ticks=50225/53072, in_queue=103297, util=86.38% 00:09:36.311 nvme0n3: ios=2300/2560, merge=0/0, ticks=32680/59822, in_queue=92502, util=88.91% 00:09:36.311 nvme0n4: ios=2579/2798, merge=0/0, ticks=17170/27300, in_queue=44470, util=95.89% 00:09:36.311 12:31:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:36.311 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=951617 00:09:36.311 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:36.311 12:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:36.311 [global] 00:09:36.311 thread=1 00:09:36.311 invalidate=1 00:09:36.311 rw=read 00:09:36.311 time_based=1 00:09:36.311 runtime=10 00:09:36.311 ioengine=libaio 00:09:36.311 direct=1 00:09:36.311 bs=4096 00:09:36.311 iodepth=1 00:09:36.311 norandommap=1 00:09:36.311 numjobs=1 00:09:36.311 00:09:36.311 [job0] 00:09:36.311 filename=/dev/nvme0n1 00:09:36.311 [job1] 00:09:36.312 filename=/dev/nvme0n2 00:09:36.312 [job2] 00:09:36.312 filename=/dev/nvme0n3 00:09:36.312 [job3] 00:09:36.312 filename=/dev/nvme0n4 00:09:36.312 Could not set queue depth (nvme0n1) 00:09:36.312 Could not set queue depth (nvme0n2) 00:09:36.312 Could not set queue depth (nvme0n3) 00:09:36.312 Could not set queue depth (nvme0n4) 00:09:36.312 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.312 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.312 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.312 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.312 fio-3.35 00:09:36.312 Starting 4 threads 00:09:39.593 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:39.593 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:39.593 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1777664, buflen=4096 00:09:39.593 fio: pid=951714, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:39.850 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.850 12:31:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:39.850 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=56459264, buflen=4096 00:09:39.850 fio: pid=951713, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:40.107 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=32800768, buflen=4096 00:09:40.107 fio: pid=951711, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:40.108 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.108 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:40.365 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=41603072, buflen=4096 00:09:40.365 fio: pid=951712, err=95/file:io_u.c:1889, func=io_u 
error, error=Operation not supported 00:09:40.365 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.365 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:40.365 00:09:40.365 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=951711: Fri Nov 15 12:31:20 2024 00:09:40.365 read: IOPS=2290, BW=9162KiB/s (9382kB/s)(31.3MiB/3496msec) 00:09:40.365 slat (usec): min=3, max=11664, avg=17.65, stdev=233.91 00:09:40.365 clat (usec): min=164, max=42078, avg=412.78, stdev=2342.14 00:09:40.365 lat (usec): min=169, max=42091, avg=430.42, stdev=2353.80 00:09:40.365 clat percentiles (usec): 00:09:40.365 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 200], 20.00th=[ 225], 00:09:40.365 | 30.00th=[ 241], 40.00th=[ 258], 50.00th=[ 273], 60.00th=[ 285], 00:09:40.365 | 70.00th=[ 306], 80.00th=[ 338], 90.00th=[ 367], 95.00th=[ 396], 00:09:40.365 | 99.00th=[ 474], 99.50th=[ 529], 99.90th=[41681], 99.95th=[41681], 00:09:40.365 | 99.99th=[42206] 00:09:40.365 bw ( KiB/s): min= 184, max=14936, per=26.55%, avg=9098.67, stdev=6059.65, samples=6 00:09:40.365 iops : min= 46, max= 3734, avg=2274.67, stdev=1514.91, samples=6 00:09:40.365 lat (usec) : 250=35.57%, 500=63.77%, 750=0.31%, 1000=0.01% 00:09:40.365 lat (msec) : 50=0.32% 00:09:40.365 cpu : usr=1.49%, sys=3.95%, ctx=8013, majf=0, minf=1 00:09:40.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.365 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.365 issued rwts: total=8009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.365 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=951712: Fri Nov 15 12:31:20 2024 00:09:40.365 read: IOPS=2687, BW=10.5MiB/s (11.0MB/s)(39.7MiB/3780msec) 00:09:40.365 slat (usec): min=4, max=28188, avg=18.63, stdev=412.02 00:09:40.365 clat (usec): min=171, max=42136, avg=350.06, stdev=2091.24 00:09:40.365 lat (usec): min=177, max=70324, avg=368.69, stdev=2185.82 00:09:40.365 clat percentiles (usec): 00:09:40.365 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:09:40.365 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:09:40.365 | 70.00th=[ 239], 80.00th=[ 285], 90.00th=[ 343], 95.00th=[ 371], 00:09:40.365 | 99.00th=[ 449], 99.50th=[ 523], 99.90th=[41681], 99.95th=[42206], 00:09:40.365 | 99.99th=[42206] 00:09:40.365 bw ( KiB/s): min= 104, max=17720, per=30.49%, avg=10447.86, stdev=6684.48, samples=7 00:09:40.365 iops : min= 26, max= 4430, avg=2611.86, stdev=1671.13, samples=7 00:09:40.365 lat (usec) : 250=74.89%, 500=24.54%, 750=0.24%, 1000=0.02% 00:09:40.365 lat (msec) : 2=0.04%, 10=0.01%, 50=0.26% 00:09:40.365 cpu : usr=1.51%, sys=3.28%, ctx=10163, majf=0, minf=2 00:09:40.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.365 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.365 issued rwts: total=10158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.365 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:09:40.365 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=951713: Fri Nov 15 12:31:20 2024 00:09:40.365 read: IOPS=4309, BW=16.8MiB/s (17.6MB/s)(53.8MiB/3199msec) 00:09:40.365 slat (usec): min=3, max=11161, avg=11.08, stdev=115.70 00:09:40.365 clat (usec): min=161, max=4212, avg=217.07, stdev=70.61 00:09:40.365 lat (usec): min=166, max=11429, avg=228.15, stdev=137.62 00:09:40.365 clat percentiles (usec): 00:09:40.365 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:09:40.365 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206], 00:09:40.365 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 273], 95.00th=[ 334], 00:09:40.365 | 99.00th=[ 478], 99.50th=[ 490], 99.90th=[ 506], 99.95th=[ 586], 00:09:40.365 | 99.99th=[ 3032] 00:09:40.365 bw ( KiB/s): min=15920, max=18456, per=50.31%, avg=17240.00, stdev=894.90, samples=6 00:09:40.365 iops : min= 3980, max= 4614, avg=4310.00, stdev=223.72, samples=6 00:09:40.365 lat (usec) : 250=87.38%, 500=12.48%, 750=0.09%, 1000=0.01% 00:09:40.365 lat (msec) : 4=0.01%, 10=0.01% 00:09:40.365 cpu : usr=1.97%, sys=4.60%, ctx=13788, majf=0, minf=1 00:09:40.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.365 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.365 issued rwts: total=13785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.365 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=951714: Fri Nov 15 12:31:20 2024 00:09:40.365 read: IOPS=147, BW=589KiB/s (603kB/s)(1736KiB/2946msec) 00:09:40.365 slat (nsec): min=7914, max=46644, avg=19666.56, stdev=5699.92 00:09:40.365 clat (usec): min=233, max=45018, avg=6709.80, stdev=14915.63 00:09:40.365 lat (usec): min=249, max=45036, avg=6729.47, stdev=14915.07 00:09:40.365 clat percentiles (usec): 00:09:40.365 | 1.00th=[ 247], 5.00th=[ 253], 10.00th=[ 262], 20.00th=[ 277], 00:09:40.365 | 30.00th=[ 302], 40.00th=[ 355], 50.00th=[ 375], 60.00th=[ 392], 00:09:40.365 | 70.00th=[ 408], 80.00th=[ 433], 90.00th=[41157], 95.00th=[42206], 00:09:40.365 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:09:40.365 | 99.99th=[44827] 00:09:40.365 bw ( KiB/s): min= 168, max= 1136, per=1.97%, avg=676.80, stdev=343.05, samples=5 00:09:40.365 iops : min= 42, max= 284, avg=169.20, stdev=85.76, samples=5 00:09:40.366 lat (usec) : 250=2.30%, 500=81.38%, 750=0.46%, 1000=0.23% 00:09:40.366 lat (msec) : 50=15.40% 00:09:40.366 cpu : usr=0.24%, sys=0.37%, ctx=435, majf=0, minf=1 00:09:40.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.366 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.366 issued rwts: total=435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.366 00:09:40.366 Run status group 0 (all jobs): 00:09:40.366 READ: bw=33.5MiB/s (35.1MB/s), 589KiB/s-16.8MiB/s (603kB/s-17.6MB/s), io=126MiB (133MB), run=2946-3780msec 00:09:40.366 00:09:40.366 Disk stats (read/write): 00:09:40.366 nvme0n1: ios=7527/0, merge=0/0, ticks=3150/0, in_queue=3150, util=95.45% 00:09:40.366 nvme0n2: ios=9392/0, merge=0/0, ticks=3301/0, in_queue=3301, 
util=94.45% 00:09:40.366 nvme0n3: ios=13430/0, merge=0/0, ticks=2811/0, in_queue=2811, util=96.23% 00:09:40.366 nvme0n4: ios=432/0, merge=0/0, ticks=2824/0, in_queue=2824, util=96.72% 00:09:40.624 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.624 12:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:40.882 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.882 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:41.140 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:41.140 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:41.399 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:41.399 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:41.657 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:41.657 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 951617 00:09:41.657 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:41.657 12:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:41.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.916 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:41.916 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:41.916 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:41.916 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.916 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:41.916 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.916 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:41.916 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:41.916 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:41.916 nvmf hotplug test: fio failed as expected 00:09:41.916 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.174 rmmod nvme_tcp 00:09:42.174 rmmod nvme_fabrics 00:09:42.174 rmmod nvme_keyring 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 949583 ']' 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 949583 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 949583 ']' 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 949583 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 949583 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 949583' 00:09:42.174 killing process with pid 949583 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 949583 00:09:42.174 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 949583 00:09:42.433 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.433 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.433 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.433 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:42.433 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
00:09:42.433 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.433 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.433 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.433 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:42.433 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.433 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.433 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.973 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.973 00:09:44.973 real 0m24.247s 00:09:44.973 user 1m24.253s 00:09:44.973 sys 0m7.461s 00:09:44.973 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.973 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.973 ************************************ 00:09:44.973 END TEST nvmf_fio_target 00:09:44.973 ************************************ 00:09:44.973 12:31:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:44.973 12:31:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.973 12:31:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.973 12:31:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.973 ************************************ 00:09:44.973 START TEST nvmf_bdevio 00:09:44.973 ************************************ 00:09:44.973 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:44.973 * Looking for test storage... 
00:09:44.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.973 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:44.973 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:44.973 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.973 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.974 --rc genhtml_branch_coverage=1 00:09:44.974 --rc genhtml_function_coverage=1 00:09:44.974 --rc genhtml_legend=1 00:09:44.974 --rc geninfo_all_blocks=1 00:09:44.974 --rc geninfo_unexecuted_blocks=1 00:09:44.974 00:09:44.974 ' 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.974 --rc genhtml_branch_coverage=1 00:09:44.974 --rc genhtml_function_coverage=1 00:09:44.974 --rc genhtml_legend=1 00:09:44.974 --rc geninfo_all_blocks=1 00:09:44.974 --rc geninfo_unexecuted_blocks=1 00:09:44.974 00:09:44.974 ' 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.974 --rc genhtml_branch_coverage=1 00:09:44.974 --rc genhtml_function_coverage=1 00:09:44.974 --rc genhtml_legend=1 00:09:44.974 --rc geninfo_all_blocks=1 00:09:44.974 --rc geninfo_unexecuted_blocks=1 00:09:44.974 00:09:44.974 ' 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.974 --rc genhtml_branch_coverage=1 00:09:44.974 --rc genhtml_function_coverage=1 00:09:44.974 --rc genhtml_legend=1 00:09:44.974 --rc geninfo_all_blocks=1 00:09:44.974 --rc geninfo_unexecuted_blocks=1 00:09:44.974 00:09:44.974 ' 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.974 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.974 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.974 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:44.974 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:44.974 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:44.974 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.974 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:44.974 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:44.974 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:44.974 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.975 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.975 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.975 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:44.975 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:44.975 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:44.975 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:46.879 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:46.879 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:46.879 12:31:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:46.879 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:46.879 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.879 
12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.879 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.138 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.138 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.138 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:47.138 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.138 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.138 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.138 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:47.138 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:47.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:09:47.138 00:09:47.138 --- 10.0.0.2 ping statistics --- 00:09:47.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.138 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:09:47.138 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:09:47.138 00:09:47.138 --- 10.0.0.1 ping statistics --- 00:09:47.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.138 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:09:47.138 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=954407 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 954407 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 954407 ']' 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.139 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.139 [2024-11-15 12:31:27.386457] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:09:47.139 [2024-11-15 12:31:27.386545] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.139 [2024-11-15 12:31:27.456799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.397 [2024-11-15 12:31:27.513185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.397 [2024-11-15 12:31:27.513237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.397 [2024-11-15 12:31:27.513264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.397 [2024-11-15 12:31:27.513275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.397 [2024-11-15 12:31:27.513283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.397 [2024-11-15 12:31:27.514880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:47.397 [2024-11-15 12:31:27.514943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:47.397 [2024-11-15 12:31:27.515010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:47.397 [2024-11-15 12:31:27.515014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.397 [2024-11-15 12:31:27.666656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.397 Malloc0 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.397 12:31:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.397 [2024-11-15 12:31:27.731574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:47.397 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:47.397 { 00:09:47.397 "params": { 00:09:47.397 "name": "Nvme$subsystem", 00:09:47.397 "trtype": "$TEST_TRANSPORT", 00:09:47.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.397 "adrfam": "ipv4", 00:09:47.397 "trsvcid": "$NVMF_PORT", 00:09:47.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.397 "hdgst": ${hdgst:-false}, 00:09:47.397 "ddgst": ${ddgst:-false} 00:09:47.397 }, 00:09:47.397 "method": "bdev_nvme_attach_controller" 00:09:47.397 } 00:09:47.397 EOF 00:09:47.398 )") 00:09:47.398 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:47.656 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:47.656 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:47.656 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:47.656 "params": { 00:09:47.656 "name": "Nvme1", 00:09:47.656 "trtype": "tcp", 00:09:47.656 "traddr": "10.0.0.2", 00:09:47.656 "adrfam": "ipv4", 00:09:47.656 "trsvcid": "4420", 00:09:47.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.656 "hdgst": false, 00:09:47.656 "ddgst": false 00:09:47.656 }, 00:09:47.656 "method": "bdev_nvme_attach_controller" 00:09:47.656 }' 00:09:47.656 [2024-11-15 12:31:27.781821] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
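For reference, the JSON emitted by gen_nvmf_target_json above is what bdevio reads from /dev/fd/62: a single bdev_nvme_attach_controller entry that attaches the freshly created TCP subsystem and exposes its namespace as the bdev Nvme1n1 exercised below. A roughly equivalent manual attach against an already-running SPDK app — a sketch only, assuming scripts/rpc.py from the SPDK tree and the default /var/tmp/spdk.sock RPC socket; it is not part of the captured run — would be:

  # Attach an NVMe-oF TCP controller; the resulting bdev is named "<name>n<nsid>", here Nvme1n1.
  ./scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme1 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1
  # hdgst/ddgst are left at their defaults (false), matching the generated config above.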
00:09:47.656 [2024-11-15 12:31:27.781901] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954504 ] 00:09:47.656 [2024-11-15 12:31:27.851471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.656 [2024-11-15 12:31:27.916642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.656 [2024-11-15 12:31:27.916697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.656 [2024-11-15 12:31:27.916700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.308 I/O targets: 00:09:48.308 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:48.308 00:09:48.308 00:09:48.308 CUnit - A unit testing framework for C - Version 2.1-3 00:09:48.308 http://cunit.sourceforge.net/ 00:09:48.308 00:09:48.308 00:09:48.308 Suite: bdevio tests on: Nvme1n1 00:09:48.308 Test: blockdev write read block ...passed 00:09:48.309 Test: blockdev write zeroes read block ...passed 00:09:48.309 Test: blockdev write zeroes read no split ...passed 00:09:48.309 Test: blockdev write zeroes read split ...passed 00:09:48.309 Test: blockdev write zeroes read split partial ...passed 00:09:48.309 Test: blockdev reset ...[2024-11-15 12:31:28.417140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:48.309 [2024-11-15 12:31:28.417246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2501640 (9): Bad file descriptor 00:09:48.309 [2024-11-15 12:31:28.433703] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:48.309 passed 00:09:48.309 Test: blockdev write read 8 blocks ...passed 00:09:48.309 Test: blockdev write read size > 128k ...passed 00:09:48.309 Test: blockdev write read invalid size ...passed 00:09:48.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:48.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:48.309 Test: blockdev write read max offset ...passed 00:09:48.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:48.309 Test: blockdev writev readv 8 blocks ...passed 00:09:48.309 Test: blockdev writev readv 30 x 1block ...passed 00:09:48.617 Test: blockdev writev readv block ...passed 00:09:48.617 Test: blockdev writev readv size > 128k ...passed 00:09:48.617 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:48.617 Test: blockdev comparev and writev ...[2024-11-15 12:31:28.643860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:48.617 [2024-11-15 12:31:28.643897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:48.617 [2024-11-15 12:31:28.643922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:48.617 [2024-11-15 12:31:28.643940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:48.617 [2024-11-15 12:31:28.644283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:48.617 [2024-11-15 12:31:28.644308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:48.617 [2024-11-15 12:31:28.644330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:48.617 [2024-11-15 12:31:28.644347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:48.617 [2024-11-15 12:31:28.644679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:48.617 [2024-11-15 12:31:28.644702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:48.617 [2024-11-15 12:31:28.644731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:48.617 [2024-11-15 12:31:28.644749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:48.617 [2024-11-15 12:31:28.645104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:48.617 [2024-11-15 12:31:28.645128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:48.617 [2024-11-15 12:31:28.645149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:48.617 [2024-11-15 12:31:28.645165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:48.617 passed 00:09:48.617 Test: blockdev nvme passthru rw ...passed 00:09:48.617 Test: blockdev nvme passthru vendor specific ...[2024-11-15 12:31:28.727004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:48.617 [2024-11-15 12:31:28.727042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:48.617 [2024-11-15 12:31:28.727184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:48.617 [2024-11-15 12:31:28.727207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:48.617 [2024-11-15 12:31:28.727348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:48.617 [2024-11-15 12:31:28.727371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:48.617 [2024-11-15 12:31:28.727503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:48.617 [2024-11-15 12:31:28.727526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:48.617 passed 00:09:48.617 Test: blockdev nvme admin passthru ...passed 00:09:48.617 Test: blockdev copy ...passed 00:09:48.617 00:09:48.617 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.617 suites 1 1 n/a 0 0 00:09:48.617 tests 23 23 23 0 0 00:09:48.617 asserts 152 152 152 0 n/a 00:09:48.617 00:09:48.617 Elapsed time = 0.946 seconds 00:09:48.875 12:31:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.875 12:31:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.875 12:31:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:48.875 12:31:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.875 12:31:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:48.875 12:31:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:48.875 12:31:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:48.875 12:31:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:48.875 12:31:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:48.875 12:31:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:48.875 12:31:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:48.875 12:31:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:48.875 rmmod nvme_tcp 00:09:48.875 rmmod nvme_fabrics 00:09:48.875 rmmod nvme_keyring 00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
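In summary, the target side of this bdevio run was stood up by the rpc_cmd calls traced earlier: create the TCP transport, back it with a 64 MiB malloc bdev of 512-byte blocks (matching the "131072 blocks of 512 bytes" bdevio reports), create subsystem cnode1, add the namespace, and add a listener on 10.0.0.2:4420. Issued by hand, the same sequence would look roughly like the sketch below — assuming scripts/rpc.py and the default RPC socket; the arguments are the ones traced above. The malloc bdev keeps the test self-contained, since no physical disk is needed behind the subsystem.

  # Sketch of the target bring-up traced above, run manually against a started nvmf_tgt:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420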
00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 954407 ']' 00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 954407 00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 954407 ']' 00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 954407 00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 954407 00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 954407' 00:09:48.875 killing process with pid 954407 00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 954407 00:09:48.875 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 954407 00:09:49.133 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:49.133 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:49.133 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:49.133 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:49.133 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:49.133 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:49.133 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:49.133 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:49.133 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:49.133 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.133 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.133 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:51.671 00:09:51.671 real 0m6.567s 00:09:51.671 user 0m10.564s 00:09:51.671 sys 0m2.253s 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:51.671 ************************************ 00:09:51.671 END TEST nvmf_bdevio 00:09:51.671 ************************************ 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:51.671 00:09:51.671 real 3m57.792s 00:09:51.671 user 10m18.405s 00:09:51.671 sys 1m8.671s 00:09:51.671 
12:31:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:51.671 ************************************ 00:09:51.671 END TEST nvmf_target_core 00:09:51.671 ************************************ 00:09:51.671 12:31:31 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:51.671 12:31:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:51.671 12:31:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.671 12:31:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:51.671 ************************************ 00:09:51.671 START TEST nvmf_target_extra 00:09:51.671 ************************************ 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:51.671 * Looking for test storage... 00:09:51.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:51.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.671 --rc genhtml_branch_coverage=1 00:09:51.671 --rc genhtml_function_coverage=1 00:09:51.671 --rc genhtml_legend=1 00:09:51.671 --rc geninfo_all_blocks=1 00:09:51.671 --rc geninfo_unexecuted_blocks=1 00:09:51.671 00:09:51.671 ' 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:51.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.671 --rc genhtml_branch_coverage=1 00:09:51.671 --rc genhtml_function_coverage=1 00:09:51.671 --rc genhtml_legend=1 00:09:51.671 --rc geninfo_all_blocks=1 00:09:51.671 --rc geninfo_unexecuted_blocks=1 00:09:51.671 00:09:51.671 ' 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:51.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.671 --rc genhtml_branch_coverage=1 00:09:51.671 --rc genhtml_function_coverage=1 00:09:51.671 --rc genhtml_legend=1 00:09:51.671 --rc geninfo_all_blocks=1 00:09:51.671 --rc geninfo_unexecuted_blocks=1 00:09:51.671 00:09:51.671 ' 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:51.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.671 --rc genhtml_branch_coverage=1 00:09:51.671 --rc genhtml_function_coverage=1 00:09:51.671 --rc genhtml_legend=1 00:09:51.671 --rc geninfo_all_blocks=1 00:09:51.671 --rc geninfo_unexecuted_blocks=1 00:09:51.671 00:09:51.671 ' 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
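The common.sh variables dumped here set the listener ports the rest of the extra suite relies on (4420 as the primary NVMe/TCP port, with the additional ports defined just below) and the initiator-side conventions (nvme connect with a generated host NQN). Purely as an illustration — a sketch using values that mirror this environment, not taken from the captured run, and assuming nvme-cli on the initiator — a host-side connect and disconnect against such a listener looks like:

  # Connect to a subsystem on the primary port, then detach when done.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1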
00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.671 12:31:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:51.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:51.672 ************************************ 00:09:51.672 START TEST nvmf_example 00:09:51.672 ************************************ 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:51.672 * Looking for test storage... 
00:09:51.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:51.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.672 --rc genhtml_branch_coverage=1 00:09:51.672 --rc genhtml_function_coverage=1 00:09:51.672 --rc genhtml_legend=1 00:09:51.672 --rc geninfo_all_blocks=1 00:09:51.672 --rc geninfo_unexecuted_blocks=1 00:09:51.672 00:09:51.672 ' 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:51.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.672 --rc genhtml_branch_coverage=1 00:09:51.672 --rc genhtml_function_coverage=1 00:09:51.672 --rc genhtml_legend=1 00:09:51.672 --rc geninfo_all_blocks=1 00:09:51.672 --rc geninfo_unexecuted_blocks=1 00:09:51.672 00:09:51.672 ' 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:51.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.672 --rc genhtml_branch_coverage=1 00:09:51.672 --rc genhtml_function_coverage=1 00:09:51.672 --rc genhtml_legend=1 00:09:51.672 --rc geninfo_all_blocks=1 00:09:51.672 --rc geninfo_unexecuted_blocks=1 00:09:51.672 00:09:51.672 ' 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:51.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.672 --rc genhtml_branch_coverage=1 00:09:51.672 --rc genhtml_function_coverage=1 00:09:51.672 --rc genhtml_legend=1 00:09:51.672 --rc geninfo_all_blocks=1 00:09:51.672 --rc geninfo_unexecuted_blocks=1 00:09:51.672 00:09:51.672 ' 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:51.672 12:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.672 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:51.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:51.673 12:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:51.673 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.208 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.208 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:54.208 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:54.208 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:54.208 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:54.208 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:54.208 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:54.208 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:54.208 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:54.208 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:54.208 12:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:54.209 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:54.209 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:54.209 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:54.209 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.209 12:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:54.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:54.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:09:54.209 00:09:54.209 --- 10.0.0.2 ping statistics --- 00:09:54.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.209 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:09:54.209 00:09:54.209 --- 10.0.0.1 ping statistics --- 00:09:54.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.209 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.209 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:54.210 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:54.210 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=956762 00:09:54.210 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:54.210 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:54.210 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 956762 00:09:54.210 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 956762 ']' 00:09:54.210 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.210 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.210 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.210 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.210 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.144 12:31:35 
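With the target listening on /var/tmp/spdk.sock, the traced rpc_cmd calls build up the storage stack: a TCP transport, a RAM-backed bdev, a subsystem, a namespace and a listener. The same five calls written against scripts/rpc.py, SPDK's standalone RPC client, instead of the test harness's rpc_cmd wrapper:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as traced
    ./scripts/rpc.py bdev_malloc_create 64 512                      # 64 MiB malloc bdev, 512 B blocks ("Malloc0" in this run)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                                    # allow any host, fixed serial number
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                  # listen on the address pinged earlier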
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:55.144 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:07.346 Initializing NVMe Controllers 00:10:07.346 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:07.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:07.346 Initialization complete. Launching workers. 00:10:07.346 ======================================================== 00:10:07.346 Latency(us) 00:10:07.346 Device Information : IOPS MiB/s Average min max 00:10:07.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14378.99 56.17 4453.63 886.70 15272.49 00:10:07.346 ======================================================== 00:10:07.346 Total : 14378.99 56.17 4453.63 886.70 15272.49 00:10:07.346 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.346 rmmod nvme_tcp 00:10:07.346 rmmod nvme_fabrics 00:10:07.346 rmmod nvme_keyring 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 956762 ']' 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 956762 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 956762 ']' 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 956762 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 956762 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
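The initiator side is a single spdk_nvme_perf run against the listener created above: queue depth 64, 4 KiB I/Os, a 30% read / 70% write random mix, for 10 seconds. The reported numbers are self-consistent: by Little's law, 64 outstanding I/Os at 14378.99 IOPS implies 64 / 14378.99 s ≈ 4.45 ms per I/O, which matches the ~4453 µs average latency in the table. The invocation, as traced:

    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    # -q 64   : 64 outstanding I/Os per queue
    # -o 4096 : 4 KiB I/O size
    # -w/-M   : random read/write workload, 30% reads
    # -t 10   : run for 10 seconds
    # -r      : transport ID of the subsystem exported above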
process_name=nvmf 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 956762' 00:10:07.346 killing process with pid 956762 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 956762 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 956762 00:10:07.346 nvmf threads initialize successfully 00:10:07.346 bdev subsystem init successfully 00:10:07.346 created a nvmf target service 00:10:07.346 create targets's poll groups done 00:10:07.346 all subsystems of target started 00:10:07.346 nvmf target is running 00:10:07.346 all subsystems of target stopped 00:10:07.346 destroy targets's poll groups done 00:10:07.346 destroyed the nvmf target service 00:10:07.346 bdev subsystem finish successfully 00:10:07.346 nvmf threads destroy successfully 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.346 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.915 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:07.915 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:07.915 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.915 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.915 00:10:07.915 real 0m16.325s 00:10:07.915 user 0m45.859s 00:10:07.915 sys 0m3.448s 00:10:07.915 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.915 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.915 ************************************ 00:10:07.915 END TEST nvmf_example 00:10:07.915 ************************************ 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:07.915 12:31:48 
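Before the filesystem test begins, nvmftestfini (traced above) tears the example environment back down: the target process is killed, the host-side modules are unloaded, the harness's iptables rules are stripped, and the SPDK namespace and addresses are removed. A condensed sketch of that cleanup, assuming the pid, namespace and interface names from this run (the namespace removal is shown as a plain ip netns delete; the harness wraps it in its own helper):

    kill 956762                                              # stop the example target; the harness then waits for it
    modprobe -v -r nvme-tcp                                  # unload host-side NVMe/TCP...
    modprobe -v -r nvme-fabrics                              # ...and fabrics modules
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop the rules tagged SPDK_NVMF
    ip netns delete cvl_0_0_ns_spdk                          # remove the target's namespace
    ip -4 addr flush cvl_0_1                                 # clear the initiator-side interface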
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:07.915 ************************************ 00:10:07.915 START TEST nvmf_filesystem 00:10:07.915 ************************************ 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:07.915 * Looking for test storage... 00:10:07.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.915 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:07.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.915 --rc genhtml_branch_coverage=1 00:10:07.916 --rc genhtml_function_coverage=1 00:10:07.916 --rc genhtml_legend=1 00:10:07.916 --rc geninfo_all_blocks=1 00:10:07.916 --rc geninfo_unexecuted_blocks=1 00:10:07.916 00:10:07.916 ' 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:07.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.916 --rc genhtml_branch_coverage=1 00:10:07.916 --rc genhtml_function_coverage=1 00:10:07.916 --rc genhtml_legend=1 00:10:07.916 --rc geninfo_all_blocks=1 00:10:07.916 --rc geninfo_unexecuted_blocks=1 00:10:07.916 00:10:07.916 ' 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:07.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.916 --rc genhtml_branch_coverage=1 00:10:07.916 --rc genhtml_function_coverage=1 00:10:07.916 --rc genhtml_legend=1 00:10:07.916 --rc geninfo_all_blocks=1 00:10:07.916 --rc geninfo_unexecuted_blocks=1 00:10:07.916 00:10:07.916 ' 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:07.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.916 --rc genhtml_branch_coverage=1 00:10:07.916 --rc genhtml_function_coverage=1 00:10:07.916 --rc genhtml_legend=1 00:10:07.916 --rc geninfo_all_blocks=1 00:10:07.916 --rc geninfo_unexecuted_blocks=1 00:10:07.916 00:10:07.916 ' 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:07.916 12:31:48 
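The lt 1.15 2 walk just above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x, which selects the --rc lcov_branch_coverage=... style LCOV_OPTS exported next. The comparison splits both version strings into numeric fields and compares them left to right; a simplified stand-in, not the harness's exact cmp_versions:

    # returns success if version $1 is strictly lower than version $2
    lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov older than 2.x"   # prints, since 1 < 2 in the first field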
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:07.916 
12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:07.916 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:07.917 #define SPDK_CONFIG_H 00:10:07.917 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:07.917 #define SPDK_CONFIG_APPS 1 00:10:07.917 #define SPDK_CONFIG_ARCH native 00:10:07.917 #undef SPDK_CONFIG_ASAN 00:10:07.917 #undef SPDK_CONFIG_AVAHI 00:10:07.917 #undef SPDK_CONFIG_CET 00:10:07.917 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:07.917 #define SPDK_CONFIG_COVERAGE 1 00:10:07.917 #define SPDK_CONFIG_CROSS_PREFIX 00:10:07.917 #undef SPDK_CONFIG_CRYPTO 00:10:07.917 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:07.917 #undef SPDK_CONFIG_CUSTOMOCF 00:10:07.917 #undef SPDK_CONFIG_DAOS 00:10:07.917 #define SPDK_CONFIG_DAOS_DIR 00:10:07.917 #define SPDK_CONFIG_DEBUG 1 00:10:07.917 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:07.917 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:07.917 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:07.917 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:07.917 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:07.917 #undef SPDK_CONFIG_DPDK_UADK 00:10:07.917 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:07.917 #define SPDK_CONFIG_EXAMPLES 1 00:10:07.917 #undef SPDK_CONFIG_FC 00:10:07.917 #define SPDK_CONFIG_FC_PATH 00:10:07.917 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:07.917 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:07.917 #define SPDK_CONFIG_FSDEV 1 00:10:07.917 #undef SPDK_CONFIG_FUSE 00:10:07.917 #undef SPDK_CONFIG_FUZZER 00:10:07.917 #define SPDK_CONFIG_FUZZER_LIB 00:10:07.917 #undef SPDK_CONFIG_GOLANG 00:10:07.917 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:07.917 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:07.917 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:07.917 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:07.917 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:07.917 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:07.917 #undef SPDK_CONFIG_HAVE_LZ4 00:10:07.917 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:07.917 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:07.917 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:07.917 #define SPDK_CONFIG_IDXD 1 00:10:07.917 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:07.917 #undef SPDK_CONFIG_IPSEC_MB 00:10:07.917 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:07.917 #define SPDK_CONFIG_ISAL 1 00:10:07.917 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:07.917 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:07.917 #define SPDK_CONFIG_LIBDIR 00:10:07.917 #undef SPDK_CONFIG_LTO 00:10:07.917 #define SPDK_CONFIG_MAX_LCORES 128 00:10:07.917 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:07.917 #define SPDK_CONFIG_NVME_CUSE 1 00:10:07.917 #undef SPDK_CONFIG_OCF 00:10:07.917 #define SPDK_CONFIG_OCF_PATH 00:10:07.917 #define SPDK_CONFIG_OPENSSL_PATH 00:10:07.917 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:07.917 #define SPDK_CONFIG_PGO_DIR 00:10:07.917 #undef SPDK_CONFIG_PGO_USE 00:10:07.917 #define SPDK_CONFIG_PREFIX /usr/local 00:10:07.917 #undef SPDK_CONFIG_RAID5F 00:10:07.917 #undef SPDK_CONFIG_RBD 00:10:07.917 #define SPDK_CONFIG_RDMA 1 00:10:07.917 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:07.917 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:07.917 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:07.917 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:07.917 #define SPDK_CONFIG_SHARED 1 00:10:07.917 #undef SPDK_CONFIG_SMA 00:10:07.917 #define SPDK_CONFIG_TESTS 1 00:10:07.917 #undef SPDK_CONFIG_TSAN 
00:10:07.917 #define SPDK_CONFIG_UBLK 1 00:10:07.917 #define SPDK_CONFIG_UBSAN 1 00:10:07.917 #undef SPDK_CONFIG_UNIT_TESTS 00:10:07.917 #undef SPDK_CONFIG_URING 00:10:07.917 #define SPDK_CONFIG_URING_PATH 00:10:07.917 #undef SPDK_CONFIG_URING_ZNS 00:10:07.917 #undef SPDK_CONFIG_USDT 00:10:07.917 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:07.917 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:07.917 #define SPDK_CONFIG_VFIO_USER 1 00:10:07.917 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:07.917 #define SPDK_CONFIG_VHOST 1 00:10:07.917 #define SPDK_CONFIG_VIRTIO 1 00:10:07.917 #undef SPDK_CONFIG_VTUNE 00:10:07.917 #define SPDK_CONFIG_VTUNE_DIR 00:10:07.917 #define SPDK_CONFIG_WERROR 1 00:10:07.917 #define SPDK_CONFIG_WPDK_DIR 00:10:07.917 #undef SPDK_CONFIG_XNVME 00:10:07.917 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:07.917 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:07.918 12:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:07.918 12:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:07.918 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:08.179 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:08.180 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
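Editorial sketch — the autotest_common.sh trace above configures the sanitizer runtimes before any test binary runs: ASan/UBSan are told to abort on error, and LeakSanitizer is pointed at a suppression file that whitelists the known libfuse3 leak. The option values below are copied from the trace; the exact redirections used by the helper (a cat/echo sequence) are assumed, and this condensed form is not part of the captured log.
    # Sanitizer setup as traced above, reduced to plain shell (sketch only).
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"   # known FUSE leak, suppressed for LeakSanitizer
    export LSAN_OPTIONS=suppressions=$asan_suppression_file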
00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 958484 ]] 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 958484 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:08.181 
12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.XAgMA7 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.XAgMA7/tests/target /tmp/spdk.XAgMA7 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:08.181 12:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=56029868032 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988536320 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5958668288 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30984237056 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994268160 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375277568 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22429696 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993911808 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994268160 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=356352 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:08.181 12:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198841344 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198853632 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:08.181 * Looking for test storage... 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=56029868032 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:08.181 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8173260800 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:08.182 12:31:48 
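Editorial sketch — set_test_storage, traced above, asks for 2 GiB (plus a 64 MiB margin) of scratch space and walks a list of candidate directories, checking df output for one whose filesystem has enough room; here it settles on the overlay root and exports SPDK_TEST_STORAGE. A condensed rendering of that selection follows; testdir is set by the framework, df -B1 --output=avail stands in for the helper's df -T array parsing, and the tmpfs/ramfs special cases are omitted.
    # Condensed storage selection (sketch): first candidate with enough free bytes wins.
    requested_size=$((2147483648 + 64 * 1024 * 1024))        # 2214592512, as in the trace
    storage_fallback=$(mktemp -udt spdk.XXXXXX)
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        mkdir -p "$target_dir"
        avail=$(df -B1 --output=avail "$target_dir" | tail -1)
        if (( avail >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done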
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.182 --rc genhtml_branch_coverage=1 00:10:08.182 --rc genhtml_function_coverage=1 00:10:08.182 --rc genhtml_legend=1 00:10:08.182 --rc geninfo_all_blocks=1 00:10:08.182 --rc geninfo_unexecuted_blocks=1 00:10:08.182 00:10:08.182 ' 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.182 --rc genhtml_branch_coverage=1 00:10:08.182 --rc genhtml_function_coverage=1 00:10:08.182 --rc genhtml_legend=1 00:10:08.182 --rc geninfo_all_blocks=1 00:10:08.182 --rc geninfo_unexecuted_blocks=1 00:10:08.182 00:10:08.182 ' 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.182 --rc genhtml_branch_coverage=1 00:10:08.182 --rc genhtml_function_coverage=1 00:10:08.182 --rc genhtml_legend=1 00:10:08.182 --rc geninfo_all_blocks=1 00:10:08.182 --rc geninfo_unexecuted_blocks=1 00:10:08.182 00:10:08.182 ' 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.182 --rc genhtml_branch_coverage=1 00:10:08.182 --rc genhtml_function_coverage=1 00:10:08.182 --rc genhtml_legend=1 00:10:08.182 --rc geninfo_all_blocks=1 00:10:08.182 --rc geninfo_unexecuted_blocks=1 00:10:08.182 00:10:08.182 ' 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
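Editorial sketch — the scripts/common.sh trace above runs lcov --version, extracts the last field, and compares it against 2 with a dotted-version helper (lt/cmp_versions): both versions are split on '.', '-' and ':' and compared field by field, so lcov 1.x gets the older --rc lcov_*_coverage option names. A minimal standalone version of that comparison is below; the name version_lt is chosen for the sketch, and the helper's per-field digit validation is omitted.
    # Minimal dotted-version compare (sketch): returns success when $1 < $2.
    version_lt() {
        local IFS='.-:'
        local -a ver1 ver2
        local v a b
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1    # equal is not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x: use --rc lcov_branch_coverage=1"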
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.182 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:08.183 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:10.715 
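Editorial note — the "[: : integer expression expected" complaint captured above is a real shell error from nvmf/common.sh line 33: the traced test is '[' '' -eq 1 ']', and -eq requires an integer on both sides, so an unset/empty variable makes the test itself fail (the branch is simply skipped and the run continues). A common guard, shown with a hypothetical variable name, is to default the value before the numeric test:
    # Illustrative only; SOME_FLAG is a placeholder, not the variable used by common.sh.
    SOME_FLAG=""
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then    # ':-0' keeps the -eq test well-formed when empty/unset
        echo "flag enabled"
    fi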
12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:10.715 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:10.715 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.715 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:10.716 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:10.716 Found net devices under 
0000:0a:00.1: cvl_0_1 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:10.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:10:10.716 00:10:10.716 --- 10.0.0.2 ping statistics --- 00:10:10.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.716 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:10.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:10:10.716 00:10:10.716 --- 10.0.0.1 ping statistics --- 00:10:10.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.716 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:10.716 ************************************ 00:10:10.716 START TEST nvmf_filesystem_no_in_capsule 00:10:10.716 ************************************ 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
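Editorial sketch — nvmf_tcp_init, traced above, keeps the initiator-side port (cvl_0_1) in the root namespace as 10.0.0.1, moves the target-side port (cvl_0_0) into the cvl_0_0_ns_spdk namespace as 10.0.0.2, opens TCP port 4420 through iptables, and checks reachability with a ping in each direction. The interface names, addresses, and commands below are taken from the trace; only the framework's bookkeeping variables are dropped.
    # Test-network bring-up as traced above (sketch; same names and addresses as the log).
    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"              # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"       # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1            # namespace -> initiator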
00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=960126 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 960126 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 960126 ']' 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.716 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.716 [2024-11-15 12:31:50.825371] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:10:10.716 [2024-11-15 12:31:50.825463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.716 [2024-11-15 12:31:50.901106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.716 [2024-11-15 12:31:50.962827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.716 [2024-11-15 12:31:50.962894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.716 [2024-11-15 12:31:50.962924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.716 [2024-11-15 12:31:50.962936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.716 [2024-11-15 12:31:50.962947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
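Editorial sketch — nvmfappstart, traced here, launches nvmf_tgt inside the target namespace (command line as logged) and then waits for the application to answer on /var/tmp/spdk.sock. The polling loop below is only an approximation of waitforlisten; rpc.py's location relative to the spdk checkout and the use of rpc_get_methods as the readiness probe are assumptions.
    # Launch line copied from the trace; the wait loop sketches what waitforlisten does.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done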
00:10:10.716 [2024-11-15 12:31:50.964655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.716 [2024-11-15 12:31:50.964731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.716 [2024-11-15 12:31:50.964782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.716 [2024-11-15 12:31:50.964790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.975 [2024-11-15 12:31:51.118891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.975 Malloc1 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.975 12:31:51 
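Editorial sketch — target/filesystem.sh, traced above and continuing below, provisions the target over RPC: a TCP transport with an 8192-byte I/O unit and zero in-capsule data (this is the no_in_capsule variant), a 512 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace and a TCP listener on 10.0.0.2:4420. The framework goes through its rpc_cmd wrapper; the roughly equivalent direct rpc.py calls would be as follows (socket path and script location assumed).
    # Same arguments as the rpc_cmd steps in the trace; rpc() stands in for rpc_cmd.
    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192 -c 0                 # -c 0: no in-capsule data
    rpc bdev_malloc_create 512 512 -b Malloc1                        # 512 MiB, 512-byte blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420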
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.975 [2024-11-15 12:31:51.305051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.975 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.233 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.233 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:11.233 { 00:10:11.233 "name": "Malloc1", 00:10:11.233 "aliases": [ 00:10:11.233 "e2e08cad-d123-45ec-a818-e3b95e761bd9" 00:10:11.233 ], 00:10:11.233 "product_name": "Malloc disk", 00:10:11.233 "block_size": 512, 00:10:11.233 "num_blocks": 1048576, 00:10:11.233 "uuid": "e2e08cad-d123-45ec-a818-e3b95e761bd9", 00:10:11.233 "assigned_rate_limits": { 00:10:11.233 "rw_ios_per_sec": 0, 00:10:11.233 "rw_mbytes_per_sec": 0, 00:10:11.233 "r_mbytes_per_sec": 0, 00:10:11.233 "w_mbytes_per_sec": 0 00:10:11.233 }, 00:10:11.233 "claimed": true, 00:10:11.233 "claim_type": "exclusive_write", 00:10:11.233 "zoned": false, 00:10:11.233 "supported_io_types": { 00:10:11.233 "read": 
true, 00:10:11.233 "write": true, 00:10:11.233 "unmap": true, 00:10:11.233 "flush": true, 00:10:11.233 "reset": true, 00:10:11.233 "nvme_admin": false, 00:10:11.233 "nvme_io": false, 00:10:11.233 "nvme_io_md": false, 00:10:11.233 "write_zeroes": true, 00:10:11.233 "zcopy": true, 00:10:11.233 "get_zone_info": false, 00:10:11.233 "zone_management": false, 00:10:11.233 "zone_append": false, 00:10:11.233 "compare": false, 00:10:11.233 "compare_and_write": false, 00:10:11.233 "abort": true, 00:10:11.233 "seek_hole": false, 00:10:11.233 "seek_data": false, 00:10:11.233 "copy": true, 00:10:11.233 "nvme_iov_md": false 00:10:11.233 }, 00:10:11.233 "memory_domains": [ 00:10:11.233 { 00:10:11.233 "dma_device_id": "system", 00:10:11.233 "dma_device_type": 1 00:10:11.233 }, 00:10:11.233 { 00:10:11.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.233 "dma_device_type": 2 00:10:11.233 } 00:10:11.233 ], 00:10:11.233 "driver_specific": {} 00:10:11.233 } 00:10:11.233 ]' 00:10:11.233 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:11.233 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:11.233 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:11.233 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:11.233 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:11.233 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:11.233 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:11.233 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:11.799 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:11.799 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:11.799 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.799 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:11.799 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:14.337 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.708 ************************************ 00:10:15.708 START TEST filesystem_ext4 00:10:15.708 ************************************ 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
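(The setup traced above, filesystem.sh lines 50-69, condenses to the command sequence below. This is only a readability summary of what the xtrace already shows — rpc_cmd wraps scripts/rpc.py in the real run, and the long --hostnqn/--hostid arguments to nvme connect are omitted here.)
    # Target side: export a 512 MiB malloc bdev over NVMe/TCP.
    rpc_py=./scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc_py bdev_malloc_create 512 512 -b Malloc1
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Host side: connect, locate the device by serial, then carve a single GPT partition.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe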
00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:15.708 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:15.708 mke2fs 1.47.0 (5-Feb-2023) 00:10:15.708 Discarding device blocks: 0/522240 done 00:10:15.708 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:15.708 Filesystem UUID: 84467ec8-46f3-441f-97ca-fb7d36e600e3 00:10:15.708 Superblock backups stored on blocks: 00:10:15.708 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:15.708 00:10:15.708 Allocating group tables: 0/64 done 00:10:15.708 Writing inode tables: 0/64 done 00:10:15.708 Creating journal (8192 blocks): done 00:10:16.224 Writing superblocks and filesystem accounting information: 0/64 done 00:10:16.224 00:10:16.224 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:16.224 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:22.776 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:22.776 
12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 960126 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:22.776 00:10:22.776 real 0m6.354s 00:10:22.776 user 0m0.020s 00:10:22.776 sys 0m0.057s 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:22.776 ************************************ 00:10:22.776 END TEST filesystem_ext4 00:10:22.776 ************************************ 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.776 ************************************ 00:10:22.776 START TEST filesystem_btrfs 00:10:22.776 ************************************ 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:22.776 12:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:22.776 btrfs-progs v6.8.1 00:10:22.776 See https://btrfs.readthedocs.io for more information. 00:10:22.776 00:10:22.776 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:22.776 NOTE: several default settings have changed in version 5.15, please make sure 00:10:22.776 this does not affect your deployments: 00:10:22.776 - DUP for metadata (-m dup) 00:10:22.776 - enabled no-holes (-O no-holes) 00:10:22.776 - enabled free-space-tree (-R free-space-tree) 00:10:22.776 00:10:22.776 Label: (null) 00:10:22.776 UUID: 541cd3af-312e-4cab-b439-4b858efe99b9 00:10:22.776 Node size: 16384 00:10:22.776 Sector size: 4096 (CPU page size: 4096) 00:10:22.776 Filesystem size: 510.00MiB 00:10:22.776 Block group profiles: 00:10:22.776 Data: single 8.00MiB 00:10:22.776 Metadata: DUP 32.00MiB 00:10:22.776 System: DUP 8.00MiB 00:10:22.776 SSD detected: yes 00:10:22.776 Zoned device: no 00:10:22.776 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:22.776 Checksum: crc32c 00:10:22.776 Number of devices: 1 00:10:22.776 Devices: 00:10:22.776 ID SIZE PATH 00:10:22.776 1 510.00MiB /dev/nvme0n1p1 00:10:22.776 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 960126 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:22.776 
12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:22.776 00:10:22.776 real 0m0.869s 00:10:22.776 user 0m0.014s 00:10:22.776 sys 0m0.109s 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:22.776 ************************************ 00:10:22.776 END TEST filesystem_btrfs 00:10:22.776 ************************************ 00:10:22.776 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:22.777 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:22.777 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.777 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.777 ************************************ 00:10:22.777 START TEST filesystem_xfs 00:10:22.777 ************************************ 00:10:22.777 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:22.777 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:22.777 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:22.777 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:22.777 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:22.777 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:22.777 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:22.777 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:22.777 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:22.777 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:22.777 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:23.035 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:23.035 = sectsz=512 attr=2, projid32bit=1 00:10:23.035 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:23.035 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:23.035 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:23.035 = sunit=0 swidth=0 blks 00:10:23.035 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:23.035 log =internal log bsize=4096 blocks=16384, version=2 00:10:23.035 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:23.035 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:23.968 Discarding blocks...Done. 00:10:23.968 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:23.968 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:25.873 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:25.873 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:25.873 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:25.873 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:25.873 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:25.873 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:25.873 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 960126 00:10:25.873 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:25.873 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:25.873 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:25.874 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:25.874 00:10:25.874 real 0m2.944s 00:10:25.874 user 0m0.022s 00:10:25.874 sys 0m0.053s 00:10:25.874 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.874 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:25.874 ************************************ 00:10:25.874 END TEST filesystem_xfs 00:10:25.874 ************************************ 00:10:25.874 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:26.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.132 12:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 960126 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 960126 ']' 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 960126 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 960126 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 960126' 00:10:26.132 killing process with pid 960126 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 960126 00:10:26.132 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 960126 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:26.697 00:10:26.697 real 0m16.115s 00:10:26.697 user 1m2.280s 00:10:26.697 sys 0m2.116s 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.697 ************************************ 00:10:26.697 END TEST nvmf_filesystem_no_in_capsule 00:10:26.697 ************************************ 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:26.697 ************************************ 00:10:26.697 START TEST nvmf_filesystem_in_capsule 00:10:26.697 ************************************ 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=962223 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 962223 00:10:26.697 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 962223 ']' 00:10:26.698 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.698 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.698 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
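(The nvmf_filesystem_in_capsule pass that starts here repeats the same flow as the previous one; per the trace, the only functional difference is the in-capsule data size passed to nvmf_create_transport, visible a few lines below. Shown side by side for clarity:)
    # The one knob that differs between the two test bodies (from the trace):
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # nvmf_filesystem_no_in_capsule
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # nvmf_filesystem_in_capsule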
00:10:26.698 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.698 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.698 [2024-11-15 12:32:06.998660] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:10:26.698 [2024-11-15 12:32:06.998770] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.956 [2024-11-15 12:32:07.073031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.956 [2024-11-15 12:32:07.128364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.956 [2024-11-15 12:32:07.128422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.956 [2024-11-15 12:32:07.128456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.956 [2024-11-15 12:32:07.128467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.956 [2024-11-15 12:32:07.128476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.956 [2024-11-15 12:32:07.129981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.956 [2024-11-15 12:32:07.130035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.956 [2024-11-15 12:32:07.130116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.956 [2024-11-15 12:32:07.130112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.956 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.956 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:26.956 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:26.956 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:26.956 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.956 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.956 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:26.956 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:26.956 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.956 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.956 [2024-11-15 12:32:07.266815] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.956 12:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.956 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:26.956 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.956 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.214 Malloc1 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.214 [2024-11-15 12:32:07.457891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:27.214 12:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.214 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:27.214 { 00:10:27.215 "name": "Malloc1", 00:10:27.215 "aliases": [ 00:10:27.215 "be799da5-5ff7-4bc1-b967-3c880f4010a3" 00:10:27.215 ], 00:10:27.215 "product_name": "Malloc disk", 00:10:27.215 "block_size": 512, 00:10:27.215 "num_blocks": 1048576, 00:10:27.215 "uuid": "be799da5-5ff7-4bc1-b967-3c880f4010a3", 00:10:27.215 "assigned_rate_limits": { 00:10:27.215 "rw_ios_per_sec": 0, 00:10:27.215 "rw_mbytes_per_sec": 0, 00:10:27.215 "r_mbytes_per_sec": 0, 00:10:27.215 "w_mbytes_per_sec": 0 00:10:27.215 }, 00:10:27.215 "claimed": true, 00:10:27.215 "claim_type": "exclusive_write", 00:10:27.215 "zoned": false, 00:10:27.215 "supported_io_types": { 00:10:27.215 "read": true, 00:10:27.215 "write": true, 00:10:27.215 "unmap": true, 00:10:27.215 "flush": true, 00:10:27.215 "reset": true, 00:10:27.215 "nvme_admin": false, 00:10:27.215 "nvme_io": false, 00:10:27.215 "nvme_io_md": false, 00:10:27.215 "write_zeroes": true, 00:10:27.215 "zcopy": true, 00:10:27.215 "get_zone_info": false, 00:10:27.215 "zone_management": false, 00:10:27.215 "zone_append": false, 00:10:27.215 "compare": false, 00:10:27.215 "compare_and_write": false, 00:10:27.215 "abort": true, 00:10:27.215 "seek_hole": false, 00:10:27.215 "seek_data": false, 00:10:27.215 "copy": true, 00:10:27.215 "nvme_iov_md": false 00:10:27.215 }, 00:10:27.215 "memory_domains": [ 00:10:27.215 { 00:10:27.215 "dma_device_id": "system", 00:10:27.215 "dma_device_type": 1 00:10:27.215 }, 00:10:27.215 { 00:10:27.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.215 "dma_device_type": 2 00:10:27.215 } 00:10:27.215 ], 00:10:27.215 "driver_specific": {} 00:10:27.215 } 00:10:27.215 ]' 00:10:27.215 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:27.215 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:27.215 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:27.215 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:27.215 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:27.215 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:27.215 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:27.215 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:28.149 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:28.149 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:28.149 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:28.149 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:28.149 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:30.047 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:30.305 12:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:30.870 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.243 ************************************ 00:10:32.243 START TEST filesystem_in_capsule_ext4 00:10:32.243 ************************************ 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:32.243 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:32.243 mke2fs 1.47.0 (5-Feb-2023) 00:10:32.243 Discarding device blocks: 0/522240 done 00:10:32.243 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:32.243 Filesystem UUID: e773153a-94b0-47df-8b92-684a463c0c87 00:10:32.243 Superblock backups stored on blocks: 00:10:32.243 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:32.243 00:10:32.243 Allocating group tables: 0/64 done 00:10:32.243 Writing inode tables: 
0/64 done 00:10:32.808 Creating journal (8192 blocks): done 00:10:33.630 Writing superblocks and filesystem accounting information: 0/64 done 00:10:33.630 00:10:33.630 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:33.630 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 962223 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:40.182 00:10:40.182 real 0m7.374s 00:10:40.182 user 0m0.021s 00:10:40.182 sys 0m0.061s 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:40.182 ************************************ 00:10:40.182 END TEST filesystem_in_capsule_ext4 00:10:40.182 ************************************ 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.182 
************************************ 00:10:40.182 START TEST filesystem_in_capsule_btrfs 00:10:40.182 ************************************ 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:40.182 btrfs-progs v6.8.1 00:10:40.182 See https://btrfs.readthedocs.io for more information. 00:10:40.182 00:10:40.182 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:40.182 NOTE: several default settings have changed in version 5.15, please make sure 00:10:40.182 this does not affect your deployments: 00:10:40.182 - DUP for metadata (-m dup) 00:10:40.182 - enabled no-holes (-O no-holes) 00:10:40.182 - enabled free-space-tree (-R free-space-tree) 00:10:40.182 00:10:40.182 Label: (null) 00:10:40.182 UUID: fd745e08-f074-49d2-be58-31adc825129f 00:10:40.182 Node size: 16384 00:10:40.182 Sector size: 4096 (CPU page size: 4096) 00:10:40.182 Filesystem size: 510.00MiB 00:10:40.182 Block group profiles: 00:10:40.182 Data: single 8.00MiB 00:10:40.182 Metadata: DUP 32.00MiB 00:10:40.182 System: DUP 8.00MiB 00:10:40.182 SSD detected: yes 00:10:40.182 Zoned device: no 00:10:40.182 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:40.182 Checksum: crc32c 00:10:40.182 Number of devices: 1 00:10:40.182 Devices: 00:10:40.182 ID SIZE PATH 00:10:40.182 1 510.00MiB /dev/nvme0n1p1 00:10:40.182 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:40.182 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:40.182 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:40.182 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:40.182 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:40.182 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:40.182 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:40.182 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:40.182 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 962223 00:10:40.182 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:40.182 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:40.182 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:40.182 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:40.182 00:10:40.182 real 0m0.784s 00:10:40.182 user 0m0.015s 00:10:40.182 sys 0m0.102s 00:10:40.182 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.182 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:10:40.182 ************************************ 00:10:40.182 END TEST filesystem_in_capsule_btrfs 00:10:40.183 ************************************ 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.183 ************************************ 00:10:40.183 START TEST filesystem_in_capsule_xfs 00:10:40.183 ************************************ 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:40.183 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:40.441 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:40.441 = sectsz=512 attr=2, projid32bit=1 00:10:40.441 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:40.441 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:40.441 data = bsize=4096 blocks=130560, imaxpct=25 00:10:40.441 = sunit=0 swidth=0 blks 00:10:40.441 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:40.441 log =internal log bsize=4096 blocks=16384, version=2 00:10:40.441 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:40.441 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:41.374 Discarding blocks...Done. 
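With the xfs filesystem created, the entries that follow run the same write/verify cycle already applied to ext4 and btrfs. Condensed from target/filesystem.sh@23-43 as traced (device, mountpoint and the target pid come straight from the log; the retry path around umount is omitted):

    mount /dev/nvme0n1p1 /mnt/device          # @23: mount the freshly formatted partition
    touch /mnt/device/aaa                     # @24: write a file through the new filesystem
    sync                                      # @25
    rm /mnt/device/aaa                        # @26: delete it again
    sync                                      # @27
    umount /mnt/device                        # @30
    kill -0 962223                            # @37: the nvmf target process must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1     # @40: namespace still visible to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # @43: partition still visible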
00:10:41.374 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:41.374 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 962223 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:43.271 00:10:43.271 real 0m2.913s 00:10:43.271 user 0m0.015s 00:10:43.271 sys 0m0.060s 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:43.271 ************************************ 00:10:43.271 END TEST filesystem_in_capsule_xfs 00:10:43.271 ************************************ 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:43.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 962223 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 962223 ']' 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 962223 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 962223 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 962223' 00:10:43.271 killing process with pid 962223 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 962223 00:10:43.271 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 962223 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:43.837 00:10:43.837 real 0m17.077s 00:10:43.837 user 1m5.916s 00:10:43.837 sys 0m2.302s 00:10:43.837 12:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.837 ************************************ 00:10:43.837 END TEST nvmf_filesystem_in_capsule 00:10:43.837 ************************************ 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.837 rmmod nvme_tcp 00:10:43.837 rmmod nvme_fabrics 00:10:43.837 rmmod nvme_keyring 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.837 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.374 00:10:46.374 real 0m38.110s 00:10:46.374 user 2m9.290s 00:10:46.374 sys 0m6.261s 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.374 
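The nvmftestfini teardown above boils down to a short sequence; sketched from the trace (module names, the SPDK_NVMF iptables tag and the cvl_0_1 interface are exactly as logged, while the body of _remove_spdk_ns is not shown in this trace and is only indicated as a comment):

    modprobe -v -r nvme-tcp                                # unloads nvme_tcp, nvme_fabrics, nvme_keyring as logged
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the test tagged with SPDK_NVMF
    # _remove_spdk_ns                                      # tears down the cvl_0_0_ns_spdk namespace (body not traced here)
    ip -4 addr flush cvl_0_1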
************************************ 00:10:46.374 END TEST nvmf_filesystem 00:10:46.374 ************************************ 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:46.374 ************************************ 00:10:46.374 START TEST nvmf_target_discovery 00:10:46.374 ************************************ 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:46.374 * Looking for test storage... 00:10:46.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.374 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:46.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.375 --rc genhtml_branch_coverage=1 00:10:46.375 --rc genhtml_function_coverage=1 00:10:46.375 --rc genhtml_legend=1 00:10:46.375 --rc geninfo_all_blocks=1 00:10:46.375 --rc geninfo_unexecuted_blocks=1 00:10:46.375 00:10:46.375 ' 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:46.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.375 --rc genhtml_branch_coverage=1 00:10:46.375 --rc genhtml_function_coverage=1 00:10:46.375 --rc genhtml_legend=1 00:10:46.375 --rc geninfo_all_blocks=1 00:10:46.375 --rc geninfo_unexecuted_blocks=1 00:10:46.375 00:10:46.375 ' 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:46.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.375 --rc genhtml_branch_coverage=1 00:10:46.375 --rc genhtml_function_coverage=1 00:10:46.375 --rc genhtml_legend=1 00:10:46.375 --rc geninfo_all_blocks=1 00:10:46.375 --rc geninfo_unexecuted_blocks=1 00:10:46.375 00:10:46.375 ' 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:46.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.375 --rc genhtml_branch_coverage=1 00:10:46.375 --rc genhtml_function_coverage=1 00:10:46.375 --rc genhtml_legend=1 00:10:46.375 --rc geninfo_all_blocks=1 00:10:46.375 --rc geninfo_unexecuted_blocks=1 00:10:46.375 00:10:46.375 ' 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.375 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.280 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.280 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:48.280 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:48.280 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:48.280 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:48.281 12:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:48.281 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:48.281 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:48.281 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:48.281 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.281 12:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.281 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:48.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:10:48.282 00:10:48.282 --- 10.0.0.2 ping statistics --- 00:10:48.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.282 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:48.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:10:48.282 00:10:48.282 --- 10.0.0.1 ping statistics --- 00:10:48.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.282 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=966383 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 966383 00:10:48.282 12:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 966383 ']' 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.282 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.541 [2024-11-15 12:32:28.667400] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:10:48.541 [2024-11-15 12:32:28.667475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.541 [2024-11-15 12:32:28.738278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.541 [2024-11-15 12:32:28.799378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.541 [2024-11-15 12:32:28.799439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.541 [2024-11-15 12:32:28.799469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.541 [2024-11-15 12:32:28.799481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.541 [2024-11-15 12:32:28.799491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
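The namespace plumbing and target launch captured above condense to the following sequence (interface names, addresses and the nvmf_tgt arguments are exactly as logged; the iptables comment tag and the waitforlisten polling are dropped for brevity):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target-side e810 port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # reach the target-side address from the root namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # and the initiator-side address from inside it
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF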
00:10:48.541 [2024-11-15 12:32:28.801091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.541 [2024-11-15 12:32:28.801149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.541 [2024-11-15 12:32:28.801216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.541 [2024-11-15 12:32:28.801219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 [2024-11-15 12:32:28.958638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 Null1 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 12:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.800 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 [2024-11-15 12:32:28.999017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 Null2 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:48.800 Null3 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 Null4 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.801 12:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.801 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:49.059 00:10:49.059 Discovery Log Number of Records 6, Generation counter 6 00:10:49.059 =====Discovery Log Entry 0====== 00:10:49.059 trtype: tcp 00:10:49.059 adrfam: ipv4 00:10:49.059 subtype: current discovery subsystem 00:10:49.059 treq: not required 00:10:49.059 portid: 0 00:10:49.059 trsvcid: 4420 00:10:49.059 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:49.059 traddr: 10.0.0.2 00:10:49.059 eflags: explicit discovery connections, duplicate discovery information 00:10:49.059 sectype: none 00:10:49.059 =====Discovery Log Entry 1====== 00:10:49.059 trtype: tcp 00:10:49.059 adrfam: ipv4 00:10:49.059 subtype: nvme subsystem 00:10:49.059 treq: not required 00:10:49.059 portid: 0 00:10:49.059 trsvcid: 4420 00:10:49.059 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:49.059 traddr: 10.0.0.2 00:10:49.059 eflags: none 00:10:49.059 sectype: none 00:10:49.059 =====Discovery Log Entry 2====== 00:10:49.059 trtype: tcp 00:10:49.059 adrfam: ipv4 00:10:49.059 subtype: nvme subsystem 00:10:49.059 treq: not required 00:10:49.059 portid: 0 00:10:49.059 trsvcid: 4420 00:10:49.059 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:49.059 traddr: 10.0.0.2 00:10:49.059 eflags: none 00:10:49.059 sectype: none 00:10:49.059 =====Discovery Log Entry 3====== 00:10:49.059 trtype: tcp 00:10:49.059 adrfam: ipv4 00:10:49.059 subtype: nvme subsystem 00:10:49.059 treq: not required 00:10:49.059 portid: 0 00:10:49.059 trsvcid: 4420 00:10:49.059 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:49.059 traddr: 10.0.0.2 00:10:49.059 eflags: none 00:10:49.059 sectype: none 00:10:49.059 =====Discovery Log Entry 4====== 00:10:49.059 trtype: tcp 00:10:49.059 adrfam: ipv4 00:10:49.059 subtype: nvme subsystem 
00:10:49.059 treq: not required 00:10:49.059 portid: 0 00:10:49.059 trsvcid: 4420 00:10:49.059 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:49.059 traddr: 10.0.0.2 00:10:49.059 eflags: none 00:10:49.059 sectype: none 00:10:49.059 =====Discovery Log Entry 5====== 00:10:49.059 trtype: tcp 00:10:49.059 adrfam: ipv4 00:10:49.059 subtype: discovery subsystem referral 00:10:49.059 treq: not required 00:10:49.059 portid: 0 00:10:49.059 trsvcid: 4430 00:10:49.059 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:49.059 traddr: 10.0.0.2 00:10:49.059 eflags: none 00:10:49.059 sectype: none 00:10:49.059 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:49.059 Perform nvmf subsystem discovery via RPC 00:10:49.059 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:49.059 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.059 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:49.059 [ 00:10:49.059 { 00:10:49.059 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:49.059 "subtype": "Discovery", 00:10:49.059 "listen_addresses": [ 00:10:49.059 { 00:10:49.059 "trtype": "TCP", 00:10:49.059 "adrfam": "IPv4", 00:10:49.059 "traddr": "10.0.0.2", 00:10:49.059 "trsvcid": "4420" 00:10:49.059 } 00:10:49.059 ], 00:10:49.059 "allow_any_host": true, 00:10:49.059 "hosts": [] 00:10:49.059 }, 00:10:49.059 { 00:10:49.059 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:49.059 "subtype": "NVMe", 00:10:49.059 "listen_addresses": [ 00:10:49.059 { 00:10:49.059 "trtype": "TCP", 00:10:49.059 "adrfam": "IPv4", 00:10:49.059 "traddr": "10.0.0.2", 00:10:49.059 "trsvcid": "4420" 00:10:49.059 } 00:10:49.059 ], 00:10:49.059 "allow_any_host": true, 00:10:49.059 "hosts": [], 00:10:49.059 "serial_number": "SPDK00000000000001", 00:10:49.059 "model_number": "SPDK bdev Controller", 00:10:49.059 "max_namespaces": 32, 00:10:49.059 "min_cntlid": 1, 00:10:49.059 "max_cntlid": 65519, 00:10:49.059 "namespaces": [ 00:10:49.059 { 00:10:49.059 "nsid": 1, 00:10:49.059 "bdev_name": "Null1", 00:10:49.059 "name": "Null1", 00:10:49.059 "nguid": "824BEF15A59542718F5E85437340B724", 00:10:49.059 "uuid": "824bef15-a595-4271-8f5e-85437340b724" 00:10:49.059 } 00:10:49.059 ] 00:10:49.059 }, 00:10:49.060 { 00:10:49.060 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:49.060 "subtype": "NVMe", 00:10:49.060 "listen_addresses": [ 00:10:49.060 { 00:10:49.060 "trtype": "TCP", 00:10:49.060 "adrfam": "IPv4", 00:10:49.060 "traddr": "10.0.0.2", 00:10:49.060 "trsvcid": "4420" 00:10:49.060 } 00:10:49.060 ], 00:10:49.060 "allow_any_host": true, 00:10:49.060 "hosts": [], 00:10:49.060 "serial_number": "SPDK00000000000002", 00:10:49.060 "model_number": "SPDK bdev Controller", 00:10:49.060 "max_namespaces": 32, 00:10:49.060 "min_cntlid": 1, 00:10:49.060 "max_cntlid": 65519, 00:10:49.060 "namespaces": [ 00:10:49.060 { 00:10:49.060 "nsid": 1, 00:10:49.060 "bdev_name": "Null2", 00:10:49.060 "name": "Null2", 00:10:49.060 "nguid": "5865A2B7650E403D890B1262398B6F10", 00:10:49.060 "uuid": "5865a2b7-650e-403d-890b-1262398b6f10" 00:10:49.060 } 00:10:49.060 ] 00:10:49.060 }, 00:10:49.060 { 00:10:49.060 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:49.060 "subtype": "NVMe", 00:10:49.060 "listen_addresses": [ 00:10:49.060 { 00:10:49.060 "trtype": "TCP", 00:10:49.060 "adrfam": "IPv4", 00:10:49.060 "traddr": "10.0.0.2", 
00:10:49.060 "trsvcid": "4420" 00:10:49.060 } 00:10:49.060 ], 00:10:49.060 "allow_any_host": true, 00:10:49.060 "hosts": [], 00:10:49.060 "serial_number": "SPDK00000000000003", 00:10:49.060 "model_number": "SPDK bdev Controller", 00:10:49.060 "max_namespaces": 32, 00:10:49.060 "min_cntlid": 1, 00:10:49.060 "max_cntlid": 65519, 00:10:49.060 "namespaces": [ 00:10:49.060 { 00:10:49.060 "nsid": 1, 00:10:49.060 "bdev_name": "Null3", 00:10:49.060 "name": "Null3", 00:10:49.060 "nguid": "5BF579B8DE674C198A911F9292B830D1", 00:10:49.060 "uuid": "5bf579b8-de67-4c19-8a91-1f9292b830d1" 00:10:49.060 } 00:10:49.060 ] 00:10:49.060 }, 00:10:49.060 { 00:10:49.060 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:49.060 "subtype": "NVMe", 00:10:49.060 "listen_addresses": [ 00:10:49.060 { 00:10:49.060 "trtype": "TCP", 00:10:49.060 "adrfam": "IPv4", 00:10:49.060 "traddr": "10.0.0.2", 00:10:49.060 "trsvcid": "4420" 00:10:49.060 } 00:10:49.060 ], 00:10:49.060 "allow_any_host": true, 00:10:49.060 "hosts": [], 00:10:49.060 "serial_number": "SPDK00000000000004", 00:10:49.060 "model_number": "SPDK bdev Controller", 00:10:49.060 "max_namespaces": 32, 00:10:49.060 "min_cntlid": 1, 00:10:49.060 "max_cntlid": 65519, 00:10:49.060 "namespaces": [ 00:10:49.060 { 00:10:49.060 "nsid": 1, 00:10:49.060 "bdev_name": "Null4", 00:10:49.060 "name": "Null4", 00:10:49.060 "nguid": "4B9B34E0D4F14F0AA1B03E8174CE940B", 00:10:49.060 "uuid": "4b9b34e0-d4f1-4f0a-a1b0-3e8174ce940b" 00:10:49.060 } 00:10:49.060 ] 00:10:49.060 } 00:10:49.060 ] 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.060 12:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:49.060 12:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:49.060 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:49.060 rmmod nvme_tcp 00:10:49.319 rmmod nvme_fabrics 00:10:49.319 rmmod nvme_keyring 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 966383 ']' 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 966383 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 966383 ']' 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 966383 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 966383 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 966383' 00:10:49.319 killing process with pid 966383 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 966383 00:10:49.319 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 966383 00:10:49.579 12:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:49.579 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:49.579 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:49.579 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:49.579 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:49.579 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:49.579 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:49.579 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:49.579 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:49.579 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.579 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.579 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.490 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:51.490 00:10:51.490 real 0m5.535s 00:10:51.490 user 0m4.474s 00:10:51.490 sys 0m1.954s 00:10:51.490 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.490 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.490 ************************************ 00:10:51.490 END TEST nvmf_target_discovery 00:10:51.490 ************************************ 00:10:51.490 12:32:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:51.490 12:32:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:51.490 12:32:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.490 12:32:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:51.490 ************************************ 00:10:51.490 START TEST nvmf_referrals 00:10:51.490 ************************************ 00:10:51.490 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:51.750 * Looking for test storage... 
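The trace above is the tail end of test/nvmf/target/discovery.sh. In outline it drives the following RPC sequence against the target on 10.0.0.2; this is a minimal sketch, not the script itself: the bdev size, NQNs and serial numbers are copied from the trace, rpc_cmd stands in for the autotest wrapper that forwards to SPDK's scripts/rpc.py, and the nvme-cli call here drops the --hostnqn/--hostid arguments the harness passes.

  # create four null bdevs and expose each one through its own NVMe/TCP subsystem
  for i in 1 2 3 4; do
      rpc_cmd bdev_null_create Null$i 102400 512
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  # publish a discovery listener plus one referral, then read the log back with nvme-cli
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420   # 6 records: current discovery, cnode1-4, one referral
  rpc_cmd nvmf_get_subsystems                # same view over RPC, as the JSON dump above
  # teardown mirrors setup
  for i in 1 2 3 4; do
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      rpc_cmd bdev_null_delete Null$i
  done
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430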
00:10:51.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:51.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.750 --rc genhtml_branch_coverage=1 00:10:51.750 --rc genhtml_function_coverage=1 00:10:51.750 --rc genhtml_legend=1 00:10:51.750 --rc geninfo_all_blocks=1 00:10:51.750 --rc geninfo_unexecuted_blocks=1 00:10:51.750 00:10:51.750 ' 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:51.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.750 --rc genhtml_branch_coverage=1 00:10:51.750 --rc genhtml_function_coverage=1 00:10:51.750 --rc genhtml_legend=1 00:10:51.750 --rc geninfo_all_blocks=1 00:10:51.750 --rc geninfo_unexecuted_blocks=1 00:10:51.750 00:10:51.750 ' 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:51.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.750 --rc genhtml_branch_coverage=1 00:10:51.750 --rc genhtml_function_coverage=1 00:10:51.750 --rc genhtml_legend=1 00:10:51.750 --rc geninfo_all_blocks=1 00:10:51.750 --rc geninfo_unexecuted_blocks=1 00:10:51.750 00:10:51.750 ' 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:51.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.750 --rc genhtml_branch_coverage=1 00:10:51.750 --rc genhtml_function_coverage=1 00:10:51.750 --rc genhtml_legend=1 00:10:51.750 --rc geninfo_all_blocks=1 00:10:51.750 --rc geninfo_unexecuted_blocks=1 00:10:51.750 00:10:51.750 ' 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.750 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:51.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:51.751 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:54.285 12:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:54.285 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:54.285 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:54.285 
12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:54.285 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:54.285 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:54.285 12:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:54.285 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:54.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:54.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:10:54.286 00:10:54.286 --- 10.0.0.2 ping statistics --- 00:10:54.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.286 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:54.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:54.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:10:54.286 00:10:54.286 --- 10.0.0.1 ping statistics --- 00:10:54.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.286 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=968474 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 968474 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 968474 ']' 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
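Before the referral tests proper, nvmftestinit splits the two e810 ports across network namespaces so the target and the initiator talk over a real link rather than loopback, and nvmfappstart then launches the target inside that namespace. Reduced to its effect, keeping the interface names and addresses from the trace (a paraphrase of what the common.sh helpers do, not the helpers themselves):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  # the target itself then runs inside the namespace (path relative to the spdk checkout):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF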
00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.286 [2024-11-15 12:32:34.304410] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:10:54.286 [2024-11-15 12:32:34.304489] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.286 [2024-11-15 12:32:34.380052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.286 [2024-11-15 12:32:34.439709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.286 [2024-11-15 12:32:34.439780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.286 [2024-11-15 12:32:34.439810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.286 [2024-11-15 12:32:34.439822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.286 [2024-11-15 12:32:34.439833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.286 [2024-11-15 12:32:34.441544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.286 [2024-11-15 12:32:34.441611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.286 [2024-11-15 12:32:34.441660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.286 [2024-11-15 12:32:34.441663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.286 [2024-11-15 12:32:34.587290] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
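With the target up, referrals.sh configures the TCP transport and then exercises the referral RPCs shown in the next stretch of the trace. In outline (the addresses, the discovery port 8009 and the referral port 4430 are taken from the trace; the jq filters are the ones visible there):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_add_referral -t tcp -a $ip -s 4430
  done
  rpc_cmd nvmf_discovery_get_referrals | jq length                        # expects 3
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort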
00:10:54.286 [2024-11-15 12:32:34.599501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.286 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:54.544 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:54.802 12:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:54.802 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:55.061 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:55.319 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:55.319 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:55.319 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:55.319 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:55.319 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:55.319 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:55.319 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:55.319 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:55.319 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:55.319 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:55.319 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:55.319 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:55.319 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.577 12:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:55.577 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:55.836 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:55.836 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:55.836 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:55.836 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:55.836 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:55.836 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:55.836 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:56.094 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
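Condensed from the xtrace above, the referral handling reduces to a short RPC sequence on the target plus a matching check from the host. A minimal sketch of both, using the same addresses and ports as this run (rpc_cmd is the harness wrapper around SPDK's JSON-RPC client, and the hostnqn/hostid UUID is the one generated for this run):

  # target side: TCP transport, discovery listener on 10.0.0.2:8009, three referrals on port 4430
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  (( $(rpc_cmd nvmf_discovery_get_referrals | jq length) == 3 ))

  # host side: read the discovery log page and extract the referral addresses
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
      -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The [[ ... == ... ]] comparisons in the trace match this host-side output against the list returned by nvmf_discovery_get_referrals; the removal path is symmetric, using nvmf_discovery_remove_referral with the same -t/-a/-s arguments until jq length reports 0.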
00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:56.352 rmmod nvme_tcp 00:10:56.352 rmmod nvme_fabrics 00:10:56.352 rmmod nvme_keyring 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 968474 ']' 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 968474 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 968474 ']' 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 968474 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 968474 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 968474' 00:10:56.352 killing process with pid 968474 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 968474 00:10:56.352 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 968474 00:10:56.612 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:56.612 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:56.612 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:56.612 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:56.612 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:56.612 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:56.612 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:56.612 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:56.612 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:56.612 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.612 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.612 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.151 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:59.151 00:10:59.151 real 0m7.161s 00:10:59.151 user 0m11.239s 00:10:59.151 sys 0m2.360s 00:10:59.152 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.152 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:59.152 ************************************ 00:10:59.152 END TEST nvmf_referrals 00:10:59.152 ************************************ 00:10:59.152 12:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:59.152 12:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.152 12:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.152 12:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:59.152 ************************************ 00:10:59.152 START TEST nvmf_connect_disconnect 00:10:59.152 ************************************ 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:59.152 * Looking for test storage... 00:10:59.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:59.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.152 --rc genhtml_branch_coverage=1 00:10:59.152 --rc genhtml_function_coverage=1 00:10:59.152 --rc genhtml_legend=1 00:10:59.152 --rc geninfo_all_blocks=1 00:10:59.152 --rc geninfo_unexecuted_blocks=1 00:10:59.152 00:10:59.152 ' 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:59.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.152 --rc genhtml_branch_coverage=1 00:10:59.152 --rc genhtml_function_coverage=1 00:10:59.152 --rc genhtml_legend=1 00:10:59.152 --rc geninfo_all_blocks=1 00:10:59.152 --rc geninfo_unexecuted_blocks=1 00:10:59.152 00:10:59.152 ' 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:59.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.152 --rc genhtml_branch_coverage=1 00:10:59.152 --rc genhtml_function_coverage=1 00:10:59.152 --rc genhtml_legend=1 00:10:59.152 --rc geninfo_all_blocks=1 00:10:59.152 --rc geninfo_unexecuted_blocks=1 00:10:59.152 00:10:59.152 ' 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:59.152 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.152 --rc genhtml_branch_coverage=1 00:10:59.152 --rc genhtml_function_coverage=1 00:10:59.152 --rc genhtml_legend=1 00:10:59.152 --rc geninfo_all_blocks=1 00:10:59.152 --rc geninfo_unexecuted_blocks=1 00:10:59.152 00:10:59.152 ' 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.152 12:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.152 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.153 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.153 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:59.153 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:59.153 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:59.153 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.055 
12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.055 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:01.056 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.056 
12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:01.056 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:01.056 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:01.056 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.056 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:01.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:11:01.315 00:11:01.315 --- 10.0.0.2 ping statistics --- 00:11:01.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.315 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:11:01.315 00:11:01.315 --- 10.0.0.1 ping statistics --- 00:11:01.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.315 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=970784 00:11:01.315 12:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 970784 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 970784 ']' 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.315 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.315 [2024-11-15 12:32:41.544326] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:11:01.315 [2024-11-15 12:32:41.544421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.315 [2024-11-15 12:32:41.617992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.574 [2024-11-15 12:32:41.677365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.574 [2024-11-15 12:32:41.677414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.574 [2024-11-15 12:32:41.677441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.574 [2024-11-15 12:32:41.677452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.574 [2024-11-15 12:32:41.677469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
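For the connect/disconnect run the same namespace topology is rebuilt; a minimal sketch of the plumbing behind the two ping checks above, condensed from nvmf_tcp_init in the trace (cvl_0_0 and cvl_0_1 are the two e810 ports this job detected):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

so the target (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1, in the default namespace) talk over a real NIC pair rather than loopback.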
00:11:01.574 [2024-11-15 12:32:41.678985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.574 [2024-11-15 12:32:41.679053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.574 [2024-11-15 12:32:41.679111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.574 [2024-11-15 12:32:41.679114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.574 [2024-11-15 12:32:41.828448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.574 12:32:41 
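The rpc_cmd calls above build the target's object model over the Unix-socket RPC: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem, and a namespace backed by that bdev (the listener is added in the next step). The same sequence issued directly with scripts/rpc.py would look roughly like this, assuming the default /var/tmp/spdk.sock socket and the values used by connect_disconnect.sh:

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"

    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0   # transport options exactly as passed by the test above
    $rpc bdev_malloc_create 64 512                      # 64 MB, 512-byte blocks; prints the bdev name "Malloc0"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a allow any host, -s serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0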
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.574 [2024-11-15 12:32:41.898426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:01.574 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:04.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.722 rmmod nvme_tcp 00:11:15.722 rmmod nvme_fabrics 00:11:15.722 rmmod nvme_keyring 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 970784 ']' 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 970784 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 970784 ']' 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 970784 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
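With the listener up on 10.0.0.2:4420, the test connects and disconnects the kernel initiator num_iterations=5 times; each pass produces one of the "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines seen above. A minimal sketch of one such iteration using nvme-cli, assuming the nvme-tcp module is loaded and the addresses from this run (the test's own loop adds waits and status checks around each step):

    for i in $(seq 1 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # I/O or namespace checks would go here
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done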
00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 970784 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 970784' 00:11:15.722 killing process with pid 970784 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 970784 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 970784 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.722 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.268 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:18.268 00:11:18.268 real 0m18.976s 00:11:18.268 user 0m56.657s 00:11:18.268 sys 0m3.514s 00:11:18.268 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.268 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:18.268 ************************************ 00:11:18.268 END TEST nvmf_connect_disconnect 00:11:18.268 ************************************ 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra 
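nvmftestfini above tears the setup down in reverse order: stop the target, drop only the iptables rules carrying the SPDK_NVMF comment tag, remove the test namespace, and flush the leftover initiator address. A condensed sketch of that cleanup, assuming the same names as this run (remove_spdk_ns is simplified to a single netns delete):

    kill "$nvmfpid"                                        # killprocess: stop the nvmf_tgt started above
    wait "$nvmfpid" 2>/dev/null || true
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: reload the ruleset minus the tagged test rules
    ip netns delete cvl_0_0_ns_spdk                        # remove_spdk_ns, simplified
    ip -4 addr flush cvl_0_1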
-- common/autotest_common.sh@10 -- # set +x 00:11:18.268 ************************************ 00:11:18.268 START TEST nvmf_multitarget 00:11:18.268 ************************************ 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:18.268 * Looking for test storage... 00:11:18.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:18.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.268 --rc genhtml_branch_coverage=1 00:11:18.268 --rc genhtml_function_coverage=1 00:11:18.268 --rc genhtml_legend=1 00:11:18.268 --rc geninfo_all_blocks=1 00:11:18.268 --rc geninfo_unexecuted_blocks=1 00:11:18.268 00:11:18.268 ' 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:18.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.268 --rc genhtml_branch_coverage=1 00:11:18.268 --rc genhtml_function_coverage=1 00:11:18.268 --rc genhtml_legend=1 00:11:18.268 --rc geninfo_all_blocks=1 00:11:18.268 --rc geninfo_unexecuted_blocks=1 00:11:18.268 00:11:18.268 ' 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:18.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.268 --rc genhtml_branch_coverage=1 00:11:18.268 --rc genhtml_function_coverage=1 00:11:18.268 --rc genhtml_legend=1 00:11:18.268 --rc geninfo_all_blocks=1 00:11:18.268 --rc geninfo_unexecuted_blocks=1 00:11:18.268 00:11:18.268 ' 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:18.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.268 --rc genhtml_branch_coverage=1 00:11:18.268 --rc genhtml_function_coverage=1 00:11:18.268 --rc genhtml_legend=1 00:11:18.268 --rc geninfo_all_blocks=1 00:11:18.268 --rc geninfo_unexecuted_blocks=1 00:11:18.268 00:11:18.268 ' 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.268 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.268 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:18.269 12:32:58 
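The "line 33: [: : integer expression expected" message above comes from common.sh running an arithmetic test on an empty variable, '[ "" -eq 1 ]'; the comparison fails with an error status, the branch is not taken, and the script carries on, so the message is noisy but harmless. A tiny sketch of the usual bash guard for that pattern (hypothetical variable name, not a patch to common.sh):

    some_flag="${SOME_FLAG:-0}"        # hypothetical flag; default empty/unset to 0 before the numeric test
    if [ "$some_flag" -eq 1 ]; then
        echo "flag enabled"
    fi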
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.269 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:20.175 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.175 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:20.176 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:20.176 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:20.176 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
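gather_supported_nvmf_pci_devs above walks the known Intel/Mellanox PCI IDs and maps each matching function to its kernel net interface through sysfs, which is how the two E810 ports end up as cvl_0_0 and cvl_0_1. A stand-alone sketch of that sysfs lookup, assuming the 0x8086:0x159b (E810) ID found in this run:

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")                   # e.g. 0x8086
        device=$(cat "$pci/device")                   # e.g. 0x159b
        [ "$vendor" = "0x8086" ] && [ "$device" = "0x159b" ] || continue
        # the net/ subdirectory names the interface bound to this PCI function
        for dev in "$pci"/net/*; do
            [ -e "$dev" ] && echo "${pci##*/} -> ${dev##*/}"
        done
    done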
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:20.176 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:20.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:20.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:11:20.436 00:11:20.436 --- 10.0.0.2 ping statistics --- 00:11:20.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.436 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:20.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:20.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:11:20.436 00:11:20.436 --- 10.0.0.1 ping statistics --- 00:11:20.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.436 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=974591 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 974591 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 974591 ']' 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.436 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:20.436 [2024-11-15 12:33:00.607048] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:11:20.436 [2024-11-15 12:33:00.607149] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.436 [2024-11-15 12:33:00.679580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.436 [2024-11-15 12:33:00.738785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.436 [2024-11-15 12:33:00.738839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.436 [2024-11-15 12:33:00.738855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.436 [2024-11-15 12:33:00.738867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.436 [2024-11-15 12:33:00.738878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.436 [2024-11-15 12:33:00.740483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.436 [2024-11-15 12:33:00.740618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.436 [2024-11-15 12:33:00.740805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.436 [2024-11-15 12:33:00.740810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.695 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.695 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:20.695 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:20.695 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:20.695 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:20.695 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.695 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:20.695 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:20.695 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:20.695 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:20.695 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:20.955 "nvmf_tgt_1" 00:11:20.955 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:20.955 "nvmf_tgt_2" 00:11:20.955 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:11:20.955 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:21.260 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:21.260 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:21.260 true 00:11:21.260 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:21.573 true 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.573 rmmod nvme_tcp 00:11:21.573 rmmod nvme_fabrics 00:11:21.573 rmmod nvme_keyring 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 974591 ']' 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 974591 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 974591 ']' 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 974591 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 974591 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.573 12:33:01 
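The multitarget test drives a separate RPC client, multitarget_rpc.py, to add two extra target instances, confirm with jq that the count went from 1 to 3, and then delete them again, checking the count drops back to 1. A minimal sketch of that sequence, assuming the script path used in this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # prints the new target's name
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two just created
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]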
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 974591' 00:11:21.573 killing process with pid 974591 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 974591 00:11:21.573 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 974591 00:11:21.870 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.870 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:21.870 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.870 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:21.870 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:21.870 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.870 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.870 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.870 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:21.870 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.870 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.870 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.775 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:23.775 00:11:23.775 real 0m6.059s 00:11:23.775 user 0m6.972s 00:11:23.775 sys 0m2.089s 00:11:23.775 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.775 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:23.775 ************************************ 00:11:23.775 END TEST nvmf_multitarget 00:11:23.775 ************************************ 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:24.035 ************************************ 00:11:24.035 START TEST nvmf_rpc 00:11:24.035 ************************************ 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:24.035 * Looking for test storage... 
00:11:24.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:24.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.035 --rc genhtml_branch_coverage=1 00:11:24.035 --rc genhtml_function_coverage=1 00:11:24.035 --rc genhtml_legend=1 00:11:24.035 --rc geninfo_all_blocks=1 00:11:24.035 --rc geninfo_unexecuted_blocks=1 00:11:24.035 00:11:24.035 ' 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:24.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.035 --rc genhtml_branch_coverage=1 00:11:24.035 --rc genhtml_function_coverage=1 00:11:24.035 --rc genhtml_legend=1 00:11:24.035 --rc geninfo_all_blocks=1 00:11:24.035 --rc geninfo_unexecuted_blocks=1 00:11:24.035 00:11:24.035 ' 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:24.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.035 --rc genhtml_branch_coverage=1 00:11:24.035 --rc genhtml_function_coverage=1 00:11:24.035 --rc genhtml_legend=1 00:11:24.035 --rc geninfo_all_blocks=1 00:11:24.035 --rc geninfo_unexecuted_blocks=1 00:11:24.035 00:11:24.035 ' 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:24.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.035 --rc genhtml_branch_coverage=1 00:11:24.035 --rc genhtml_function_coverage=1 00:11:24.035 --rc genhtml_legend=1 00:11:24.035 --rc geninfo_all_blocks=1 00:11:24.035 --rc geninfo_unexecuted_blocks=1 00:11:24.035 00:11:24.035 ' 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.035 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:24.036 12:33:04 
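Before any of the fabric connects later in this run, the sourced common.sh derives a host identity once and reuses it everywhere: nvme gen-hostnqn produces the NQN, the UUID suffix doubles as the host ID, and both are packed into an argument array. A small sketch of that pattern, with variable names following the trace; treat it as one illustrative way to get the same values, not the script itself:

# Generate a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>,
# keep the UUID portion as the host ID, and bundle both as connect arguments.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}          # strip everything up to the last ':'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# Every initiator-side connect later in the log can then be written as:
#   nvme connect "${NVME_HOST[@]}" -t tcp -n <subsystem NQN> -a <target IP> -s 4420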
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:24.036 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:26.572 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.572 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:26.573 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:26.573 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:26.573 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.573 12:33:06 
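What the block above is doing: the harness walks the PCI bus looking for supported NICs by vendor:device ID (here the two Intel 0x8086:0x159b E810 functions), then resolves each PCI address to the kernel netdevs that live under it in sysfs. A condensed sketch of that discovery using lspci and sysfs directly; the variable names are illustrative:

# List PCI functions matching the E810 ID pair seen in the log above, then
# find the net device(s) the kernel created for each function.
intel=0x8086
e810_dev=0x159b

for pci in $(lspci -Dn -d "${intel#0x}:${e810_dev#0x}" | awk '{print $1}'); do
    # /sys/bus/pci/devices/<addr>/net/ holds one entry per netdev on that function.
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdev" ] || continue
        echo "Found net device under $pci: ${netdev##*/}"
    done
done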
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:11:26.573 00:11:26.573 --- 10.0.0.2 ping statistics --- 00:11:26.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.573 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:26.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:11:26.573 00:11:26.573 --- 10.0.0.1 ping statistics --- 00:11:26.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.573 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=976809 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 976809 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 976809 ']' 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.573 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.573 [2024-11-15 12:33:06.753258] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
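The nvmf_tcp_init sequence above splits the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into a fresh network namespace and given 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, both directions are pinged, and the target application is then launched inside the namespace and polled until its RPC socket answers. A condensed sketch of that flow, assuming it is run as root from an SPDK checkout; the wait loop is an illustrative stand-in for the waitforlisten helper seen in the log:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1             # namespace -> root ns

# Start the target inside the namespace with the same flags as the log, then
# wait until the RPC socket accepts requests before issuing any RPCs.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done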
00:11:26.573 [2024-11-15 12:33:06.753353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.573 [2024-11-15 12:33:06.836569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.573 [2024-11-15 12:33:06.897233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.573 [2024-11-15 12:33:06.897292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.573 [2024-11-15 12:33:06.897322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.573 [2024-11-15 12:33:06.897334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.573 [2024-11-15 12:33:06.897344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.573 [2024-11-15 12:33:06.899027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.573 [2024-11-15 12:33:06.899061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.573 [2024-11-15 12:33:06.899181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.573 [2024-11-15 12:33:06.899185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:26.833 "tick_rate": 2700000000, 00:11:26.833 "poll_groups": [ 00:11:26.833 { 00:11:26.833 "name": "nvmf_tgt_poll_group_000", 00:11:26.833 "admin_qpairs": 0, 00:11:26.833 "io_qpairs": 0, 00:11:26.833 "current_admin_qpairs": 0, 00:11:26.833 "current_io_qpairs": 0, 00:11:26.833 "pending_bdev_io": 0, 00:11:26.833 "completed_nvme_io": 0, 00:11:26.833 "transports": [] 00:11:26.833 }, 00:11:26.833 { 00:11:26.833 "name": "nvmf_tgt_poll_group_001", 00:11:26.833 "admin_qpairs": 0, 00:11:26.833 "io_qpairs": 0, 00:11:26.833 "current_admin_qpairs": 0, 00:11:26.833 "current_io_qpairs": 0, 00:11:26.833 "pending_bdev_io": 0, 00:11:26.833 "completed_nvme_io": 0, 00:11:26.833 "transports": [] 00:11:26.833 }, 00:11:26.833 { 00:11:26.833 "name": "nvmf_tgt_poll_group_002", 00:11:26.833 "admin_qpairs": 0, 00:11:26.833 "io_qpairs": 0, 00:11:26.833 
"current_admin_qpairs": 0, 00:11:26.833 "current_io_qpairs": 0, 00:11:26.833 "pending_bdev_io": 0, 00:11:26.833 "completed_nvme_io": 0, 00:11:26.833 "transports": [] 00:11:26.833 }, 00:11:26.833 { 00:11:26.833 "name": "nvmf_tgt_poll_group_003", 00:11:26.833 "admin_qpairs": 0, 00:11:26.833 "io_qpairs": 0, 00:11:26.833 "current_admin_qpairs": 0, 00:11:26.833 "current_io_qpairs": 0, 00:11:26.833 "pending_bdev_io": 0, 00:11:26.833 "completed_nvme_io": 0, 00:11:26.833 "transports": [] 00:11:26.833 } 00:11:26.833 ] 00:11:26.833 }' 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.833 [2024-11-15 12:33:07.150800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:26.833 "tick_rate": 2700000000, 00:11:26.833 "poll_groups": [ 00:11:26.833 { 00:11:26.833 "name": "nvmf_tgt_poll_group_000", 00:11:26.833 "admin_qpairs": 0, 00:11:26.833 "io_qpairs": 0, 00:11:26.833 "current_admin_qpairs": 0, 00:11:26.833 "current_io_qpairs": 0, 00:11:26.833 "pending_bdev_io": 0, 00:11:26.833 "completed_nvme_io": 0, 00:11:26.833 "transports": [ 00:11:26.833 { 00:11:26.833 "trtype": "TCP" 00:11:26.833 } 00:11:26.833 ] 00:11:26.833 }, 00:11:26.833 { 00:11:26.833 "name": "nvmf_tgt_poll_group_001", 00:11:26.833 "admin_qpairs": 0, 00:11:26.833 "io_qpairs": 0, 00:11:26.833 "current_admin_qpairs": 0, 00:11:26.833 "current_io_qpairs": 0, 00:11:26.833 "pending_bdev_io": 0, 00:11:26.833 "completed_nvme_io": 0, 00:11:26.833 "transports": [ 00:11:26.833 { 00:11:26.833 "trtype": "TCP" 00:11:26.833 } 00:11:26.833 ] 00:11:26.833 }, 00:11:26.833 { 00:11:26.833 "name": "nvmf_tgt_poll_group_002", 00:11:26.833 "admin_qpairs": 0, 00:11:26.833 "io_qpairs": 0, 00:11:26.833 "current_admin_qpairs": 0, 00:11:26.833 "current_io_qpairs": 0, 00:11:26.833 "pending_bdev_io": 0, 00:11:26.833 "completed_nvme_io": 0, 00:11:26.833 "transports": [ 00:11:26.833 { 00:11:26.833 "trtype": "TCP" 
00:11:26.833 } 00:11:26.833 ] 00:11:26.833 }, 00:11:26.833 { 00:11:26.833 "name": "nvmf_tgt_poll_group_003", 00:11:26.833 "admin_qpairs": 0, 00:11:26.833 "io_qpairs": 0, 00:11:26.833 "current_admin_qpairs": 0, 00:11:26.833 "current_io_qpairs": 0, 00:11:26.833 "pending_bdev_io": 0, 00:11:26.833 "completed_nvme_io": 0, 00:11:26.833 "transports": [ 00:11:26.833 { 00:11:26.833 "trtype": "TCP" 00:11:26.833 } 00:11:26.833 ] 00:11:26.833 } 00:11:26.833 ] 00:11:26.833 }' 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:26.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.093 Malloc1 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
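The RPC exchange above is: query nvmf_get_stats (four poll groups, no transports yet), create the TCP transport, query again and confirm every poll group now carries a TCP transport with zero qpairs, then create a 64 MiB Malloc1 bdev and a subsystem to export it. Outside the harness the same sequence can be driven with scripts/rpc.py and jq roughly as below; rpc_cmd in the log is the test wrapper around rpc.py, and this is only a sketch of the equivalent calls:

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192          # transport flags copied verbatim from the trace
$RPC bdev_malloc_create 64 512 -b Malloc1              # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
                                                       # -a: allow any host, -s: serial number

# The jcount/jsum helpers in the log reduce to jq one-liners:
$RPC nvmf_get_stats | jq '.poll_groups[].name' | wc -l          # number of poll groups (4 here)
$RPC nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'     # total I/O qpairs (0 while idle)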
common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.093 [2024-11-15 12:33:07.306409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:27.093 [2024-11-15 12:33:07.329024] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:27.093 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:27.093 could not add new controller: failed to write to nvme-fabrics device 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:27.093 12:33:07 
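The failed connect above is deliberate: allow_any_host was disabled for cnode1 and the connecting host NQN has not been whitelisted, so the target rejects the fabrics connect and the initiator sees an I/O error on /dev/nvme-fabrics. The trace that follows fixes it by whitelisting the host. The equivalent RPC-level recipe, sketched with scripts/rpc.py and the host NQN generated earlier in this run:

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
SUBSYS=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Lock the subsystem down, then grant access to exactly one host NQN.
$RPC nvmf_subsystem_allow_any_host -d "$SUBSYS"        # -d: disable "any host" access
$RPC nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN"      # whitelist this initiator

# A connect presenting the whitelisted identity now succeeds; removing the host
# again (nvmf_subsystem_remove_host) makes the same connect fail, as the log shows.
nvme connect --hostnqn="$HOSTNQN" --hostid="${HOSTNQN##*:}" \
    -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420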
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.093 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:28.026 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:28.026 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:28.026 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.026 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:28.026 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:29.927 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.927 [2024-11-15 12:33:10.250563] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:30.185 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:30.185 could not add new controller: failed to write to nvme-fabrics device 00:11:30.185 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:30.185 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:30.185 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:30.185 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:30.185 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:30.185 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.185 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.185 
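waitforserial and waitforserial_disconnect, which the trace keeps running between every connect and disconnect, are bounded polls over lsblk: wait until a block device carrying the subsystem serial (SPDKISFASTANDAWESOME) shows up, and later until it is gone again. A minimal re-implementation of that pattern; the real helpers live in autotest_common.sh, so treat these as illustrative:

wait_for_serial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        # One namespace with this serial is enough to call the connect successful.
        if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
            return 0
        fi
        sleep 2
    done
    return 1
}

wait_for_serial_gone() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 2
    done
    return 1
}

# Typical use around a connect/disconnect pair, as in the log:
#   wait_for_serial SPDKISFASTANDAWESOME
#   nvme disconnect -n nqn.2016-06.io.spdk:cnode1
#   wait_for_serial_gone SPDKISFASTANDAWESOME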
12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.185 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.751 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:30.751 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:30.751 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.751 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:30.751 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:32.651 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:32.651 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:32.651 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.651 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:32.651 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.651 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:32.651 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.651 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.651 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:32.651 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:32.651 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.911 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:32.911 
12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.911 [2024-11-15 12:33:13.034165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.911 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.476 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:33.476 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:33.476 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.476 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:33.476 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.004 [2024-11-15 12:33:15.943628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
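From here to the end of the section the same cycle repeats five times, driven by the for i in $(seq 1 $loops) seen above: recreate cnode1, listen on 10.0.0.2:4420, export Malloc1 as namespace 5, open it to any host, connect, verify the serial, disconnect, then strip the namespace and delete the subsystem. One iteration, condensed into plain rpc.py and nvme calls under the same assumptions as the earlier sketches:

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
SUBSYS=nqn.2016-06.io.spdk:cnode1

for i in $(seq 1 5); do
    $RPC nvmf_create_subsystem "$SUBSYS" -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_listener "$SUBSYS" -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns "$SUBSYS" Malloc1 -n 5        # expose Malloc1 as NSID 5
    $RPC nvmf_subsystem_allow_any_host "$SUBSYS"             # no -d/-e: enable any host

    # HOSTNQN/HOSTID as produced by nvme gen-hostnqn earlier in this run.
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420
    wait_for_serial SPDKISFASTANDAWESOME                     # helper from the sketch above
    nvme disconnect -n "$SUBSYS"
    wait_for_serial_gone SPDKISFASTANDAWESOME

    $RPC nvmf_subsystem_remove_ns "$SUBSYS" 5
    $RPC nvmf_delete_subsystem "$SUBSYS"
done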
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.004 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.570 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:36.570 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:36.570 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.570 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:36.570 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.467 [2024-11-15 12:33:18.768242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.467 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.401 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:39.401 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:39.401 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.401 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:39.401 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:41.298 
12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.298 [2024-11-15 12:33:21.519180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.298 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.232 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:42.232 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:42.232 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.232 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:42.232 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:44.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.132 [2024-11-15 12:33:24.336681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.132 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.698 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.698 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:44.698 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.698 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:44.698 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:47.227 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:47.227 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:47.227 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.227 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:47.227 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.227 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:47.227 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:47.227 
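The trace above cycles through the same target/rpc.sh@81-94 loop body several times. Condensed into a hedged shell sketch: the NQN, serial, 10.0.0.2:4420 listener, Malloc1 bdev, nsid 5 and host NQN are copied from the trace, while the rpc.py path and the overall framing are assumptions rather than the literal test script (the wait helpers are the ones sketched just above):

    rpc=./scripts/rpc.py                                    # SPDK RPC client, path assumed relative to the spdk repo
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME           # subsystem with the serial the host greps for
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420  # NVMe/TCP listener
    $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5                      # expose bdev Malloc1 as nsid 5
    $rpc nvmf_subsystem_allow_any_host $nqn
    nvme connect -t tcp -n $nqn -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    waitforserial SPDKISFASTANDAWESOME                                # helper sketched earlier in this log
    nvme disconnect -n $nqn                                           # drop the host connection again
    waitforserial_disconnect SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_remove_ns $nqn 5                              # detach nsid 5
    $rpc nvmf_delete_subsystem $nqn                                   # tear down before the next iteration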
12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.227 [2024-11-15 12:33:27.106318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.227 [2024-11-15 12:33:27.154390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.227 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 
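From target/rpc.sh@99 onward the test switches to a lighter loop that never connects a host: it only exercises subsystem and namespace add/remove over RPC, five times in a row. A hedged sketch of that loop body, with the commands taken from the trace and the loop count from the `seq 1 5`; the nsid comment is an inference from the remove call, not something the trace states explicitly:

    rpc=./scripts/rpc.py                                   # SPDK RPC client, path assumed as before
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n here; removed as nsid 1 below
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done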
12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 [2024-11-15 12:33:27.202562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 [2024-11-15 12:33:27.250733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 [2024-11-15 12:33:27.298912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:47.228 "tick_rate": 2700000000, 00:11:47.228 "poll_groups": [ 00:11:47.228 { 00:11:47.228 "name": "nvmf_tgt_poll_group_000", 00:11:47.228 "admin_qpairs": 2, 00:11:47.228 "io_qpairs": 84, 00:11:47.228 "current_admin_qpairs": 0, 00:11:47.228 "current_io_qpairs": 0, 00:11:47.228 "pending_bdev_io": 0, 00:11:47.228 "completed_nvme_io": 183, 00:11:47.228 "transports": [ 00:11:47.228 { 00:11:47.228 "trtype": "TCP" 00:11:47.228 } 00:11:47.228 ] 00:11:47.228 }, 00:11:47.228 { 00:11:47.228 "name": "nvmf_tgt_poll_group_001", 00:11:47.228 "admin_qpairs": 2, 00:11:47.228 "io_qpairs": 84, 00:11:47.228 "current_admin_qpairs": 0, 00:11:47.228 "current_io_qpairs": 0, 00:11:47.228 "pending_bdev_io": 0, 00:11:47.228 "completed_nvme_io": 192, 00:11:47.228 "transports": [ 00:11:47.228 { 00:11:47.228 "trtype": "TCP" 00:11:47.228 } 00:11:47.228 ] 00:11:47.228 }, 00:11:47.228 { 00:11:47.228 "name": "nvmf_tgt_poll_group_002", 00:11:47.228 "admin_qpairs": 1, 00:11:47.228 "io_qpairs": 84, 00:11:47.228 "current_admin_qpairs": 0, 00:11:47.228 "current_io_qpairs": 0, 00:11:47.228 "pending_bdev_io": 0, 00:11:47.228 "completed_nvme_io": 127, 00:11:47.228 "transports": [ 00:11:47.228 { 00:11:47.228 "trtype": "TCP" 00:11:47.228 } 00:11:47.228 ] 00:11:47.228 }, 00:11:47.228 { 00:11:47.228 "name": "nvmf_tgt_poll_group_003", 00:11:47.228 "admin_qpairs": 2, 00:11:47.228 "io_qpairs": 84, 00:11:47.228 "current_admin_qpairs": 0, 00:11:47.228 "current_io_qpairs": 0, 00:11:47.228 "pending_bdev_io": 0, 00:11:47.228 "completed_nvme_io": 184, 00:11:47.228 "transports": [ 00:11:47.228 { 00:11:47.228 "trtype": "TCP" 00:11:47.228 } 00:11:47.228 ] 00:11:47.228 } 00:11:47.228 ] 00:11:47.228 }' 00:11:47.228 12:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:47.228 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.229 rmmod nvme_tcp 00:11:47.229 rmmod nvme_fabrics 00:11:47.229 rmmod nvme_keyring 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 976809 ']' 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 976809 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 976809 ']' 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 976809 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 976809 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 976809' 
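The @110-@113 checks above sum per-poll-group counters out of nvmf_get_stats. A hedged sketch of that jsum aggregation, with the jq filter and awk reducer copied from the trace and the surrounding plumbing assumed; the arithmetic in the comments is taken from the stats JSON captured above:

    rpc=./scripts/rpc.py                          # SPDK RPC client, path assumed as before
    stats=$($rpc nvmf_get_stats)                  # JSON like the block captured above
    jsum() {                                      # sum one numeric field across all poll groups
        jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    jsum '.poll_groups[].admin_qpairs'            # 2 + 2 + 1 + 2 = 7   in this run
    jsum '.poll_groups[].io_qpairs'               # 84 * 4 groups = 336 in this run
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # the test only asserts the sums are non-zero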
00:11:47.229 killing process with pid 976809 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 976809 00:11:47.229 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 976809 00:11:47.487 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.487 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.487 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.487 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:47.487 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:47.487 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.487 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.487 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.487 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.487 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.487 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.487 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.024 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.024 00:11:50.024 real 0m25.690s 00:11:50.024 user 1m22.866s 00:11:50.024 sys 0m4.398s 00:11:50.024 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.024 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.024 ************************************ 00:11:50.024 END TEST nvmf_rpc 00:11:50.024 ************************************ 00:11:50.024 12:33:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:50.024 12:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:50.024 12:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.024 12:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:50.024 ************************************ 00:11:50.024 START TEST nvmf_invalid 00:11:50.024 ************************************ 00:11:50.024 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:50.024 * Looking for test storage... 
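The nvmftestfini teardown is interleaved above with the rmmod output. A hedged outline of what it amounts to on this phy/TCP run, reconstructed from the nvmf/common.sh and autotest_common.sh lines in the trace; the pid is this run's nvmf_tgt as logged, and iptr/remove_spdk_ns are SPDK helpers shown here only in outline, not verbatim:

    sync
    modprobe -v -r nvme-tcp                       # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill 976809 && wait 976809                    # killprocess: stop this run's nvmf_tgt (pid from the log)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the SPDK_NVMF rules
    ip -4 addr flush cvl_0_1                      # clear the second e810 test port before the next suite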
00:11:50.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.024 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:50.024 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:11:50.024 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.024 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:50.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.025 --rc genhtml_branch_coverage=1 00:11:50.025 --rc genhtml_function_coverage=1 00:11:50.025 --rc genhtml_legend=1 00:11:50.025 --rc geninfo_all_blocks=1 00:11:50.025 --rc geninfo_unexecuted_blocks=1 00:11:50.025 00:11:50.025 ' 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:50.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.025 --rc genhtml_branch_coverage=1 00:11:50.025 --rc genhtml_function_coverage=1 00:11:50.025 --rc genhtml_legend=1 00:11:50.025 --rc geninfo_all_blocks=1 00:11:50.025 --rc geninfo_unexecuted_blocks=1 00:11:50.025 00:11:50.025 ' 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:50.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.025 --rc genhtml_branch_coverage=1 00:11:50.025 --rc genhtml_function_coverage=1 00:11:50.025 --rc genhtml_legend=1 00:11:50.025 --rc geninfo_all_blocks=1 00:11:50.025 --rc geninfo_unexecuted_blocks=1 00:11:50.025 00:11:50.025 ' 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:50.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.025 --rc genhtml_branch_coverage=1 00:11:50.025 --rc genhtml_function_coverage=1 00:11:50.025 --rc genhtml_legend=1 00:11:50.025 --rc geninfo_all_blocks=1 00:11:50.025 --rc geninfo_unexecuted_blocks=1 00:11:50.025 00:11:50.025 ' 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:50.025 12:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.025 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.026 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.560 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:52.561 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:52.561 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:52.561 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:52.561 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:11:52.561 00:11:52.561 --- 10.0.0.2 ping statistics --- 00:11:52.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.561 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:52.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:11:52.561 00:11:52.561 --- 10.0.0.1 ping statistics --- 00:11:52.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.561 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=981907 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 981907 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 981907 ']' 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.561 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:52.561 [2024-11-15 12:33:32.518716] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:11:52.561 [2024-11-15 12:33:32.518808] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.561 [2024-11-15 12:33:32.591788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.561 [2024-11-15 12:33:32.649658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.561 [2024-11-15 12:33:32.649722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.561 [2024-11-15 12:33:32.649738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.561 [2024-11-15 12:33:32.649763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.561 [2024-11-15 12:33:32.649774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.561 [2024-11-15 12:33:32.651272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.562 [2024-11-15 12:33:32.651330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.562 [2024-11-15 12:33:32.651397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.562 [2024-11-15 12:33:32.651400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.562 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.562 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:52.562 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.562 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.562 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:52.562 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.562 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:52.562 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10517 00:11:52.822 [2024-11-15 12:33:33.065822] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:52.822 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:52.822 { 00:11:52.822 "nqn": "nqn.2016-06.io.spdk:cnode10517", 00:11:52.822 "tgt_name": "foobar", 00:11:52.822 "method": "nvmf_create_subsystem", 00:11:52.822 "req_id": 1 00:11:52.822 } 00:11:52.822 Got JSON-RPC error response 00:11:52.822 response: 00:11:52.822 { 00:11:52.822 "code": -32603, 00:11:52.822 "message": "Unable to find target foobar" 00:11:52.822 }' 00:11:52.822 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:52.822 { 00:11:52.822 "nqn": "nqn.2016-06.io.spdk:cnode10517", 00:11:52.822 "tgt_name": "foobar", 00:11:52.822 "method": "nvmf_create_subsystem", 00:11:52.822 "req_id": 1 00:11:52.822 } 00:11:52.822 Got JSON-RPC error response 00:11:52.822 
response: 00:11:52.822 { 00:11:52.822 "code": -32603, 00:11:52.822 "message": "Unable to find target foobar" 00:11:52.822 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:52.822 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:52.822 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26429 00:11:53.080 [2024-11-15 12:33:33.330700] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26429: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:53.080 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:53.080 { 00:11:53.080 "nqn": "nqn.2016-06.io.spdk:cnode26429", 00:11:53.080 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:53.080 "method": "nvmf_create_subsystem", 00:11:53.080 "req_id": 1 00:11:53.080 } 00:11:53.080 Got JSON-RPC error response 00:11:53.080 response: 00:11:53.080 { 00:11:53.080 "code": -32602, 00:11:53.080 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:53.080 }' 00:11:53.080 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:53.080 { 00:11:53.080 "nqn": "nqn.2016-06.io.spdk:cnode26429", 00:11:53.080 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:53.080 "method": "nvmf_create_subsystem", 00:11:53.080 "req_id": 1 00:11:53.080 } 00:11:53.080 Got JSON-RPC error response 00:11:53.080 response: 00:11:53.080 { 00:11:53.080 "code": -32602, 00:11:53.080 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:53.080 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:53.080 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:53.080 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode21721 00:11:53.339 [2024-11-15 12:33:33.599525] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21721: invalid model number 'SPDK_Controller' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:53.339 { 00:11:53.339 "nqn": "nqn.2016-06.io.spdk:cnode21721", 00:11:53.339 "model_number": "SPDK_Controller\u001f", 00:11:53.339 "method": "nvmf_create_subsystem", 00:11:53.339 "req_id": 1 00:11:53.339 } 00:11:53.339 Got JSON-RPC error response 00:11:53.339 response: 00:11:53.339 { 00:11:53.339 "code": -32602, 00:11:53.339 "message": "Invalid MN SPDK_Controller\u001f" 00:11:53.339 }' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:53.339 { 00:11:53.339 "nqn": "nqn.2016-06.io.spdk:cnode21721", 00:11:53.339 "model_number": "SPDK_Controller\u001f", 00:11:53.339 "method": "nvmf_create_subsystem", 00:11:53.339 "req_id": 1 00:11:53.339 } 00:11:53.339 Got JSON-RPC error response 00:11:53.339 response: 00:11:53.339 { 00:11:53.339 "code": -32602, 00:11:53.339 "message": "Invalid MN SPDK_Controller\u001f" 00:11:53.339 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:53.339 12:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.339 12:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:53.339 
12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:53.339 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:53.340 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:53.340 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.340 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.340 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:53.340 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:53.340 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:53.340 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.340 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.340 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:53.340 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:53.340 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:53.340 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.340 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.340 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 
00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ J == \- ]] 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'JB8bc8<`=tT_<:m*]' 00:11:53.598 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'JB8bc8<`=tT_<:m*]' nqn.2016-06.io.spdk:cnode7845 00:11:53.857 [2024-11-15 12:33:33.952773] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7845: invalid serial number 'JB8bc8<`=tT_<:m*]' 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:53.857 { 00:11:53.857 "nqn": "nqn.2016-06.io.spdk:cnode7845", 00:11:53.857 "serial_number": "JB8bc8<`=tT_<:m*]", 00:11:53.857 "method": "nvmf_create_subsystem", 00:11:53.857 "req_id": 1 00:11:53.857 } 00:11:53.857 Got JSON-RPC error response 00:11:53.857 response: 00:11:53.857 { 00:11:53.857 "code": -32602, 00:11:53.857 "message": "Invalid SN JB8bc8<`=tT_<:m*]" 00:11:53.857 }' 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:53.857 { 00:11:53.857 "nqn": "nqn.2016-06.io.spdk:cnode7845", 00:11:53.857 "serial_number": "JB8bc8<`=tT_<:m*]", 00:11:53.857 "method": "nvmf_create_subsystem", 00:11:53.857 "req_id": 1 00:11:53.857 } 00:11:53.857 Got JSON-RPC error response 00:11:53.857 response: 00:11:53.857 { 00:11:53.857 "code": -32602, 00:11:53.857 "message": "Invalid SN JB8bc8<`=tT_<:m*]" 00:11:53.857 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' 
'80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:53.857 12:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:53.857 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.857 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.857 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:53.857 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:53.857 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:53.857 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.857 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.857 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:53.857 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:53.857 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:53.858 12:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 
00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:53.858 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x76' 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '5r{oH/st)Rsc$anx]n.0RcO=A)(i`{ .QEDqmBv,x' 00:11:53.859 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '5r{oH/st)Rsc$anx]n.0RcO=A)(i`{ .QEDqmBv,x' nqn.2016-06.io.spdk:cnode31297 00:11:54.117 [2024-11-15 12:33:34.358083] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31297: invalid model number '5r{oH/st)Rsc$anx]n.0RcO=A)(i`{ .QEDqmBv,x' 00:11:54.117 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:54.117 { 00:11:54.117 "nqn": "nqn.2016-06.io.spdk:cnode31297", 00:11:54.117 "model_number": "5r{oH/st)Rsc$anx]n.0RcO=A)(i`{ .QEDqmBv,x", 00:11:54.117 "method": "nvmf_create_subsystem", 00:11:54.117 "req_id": 1 00:11:54.117 } 00:11:54.117 Got JSON-RPC error response 00:11:54.117 response: 00:11:54.117 { 00:11:54.117 "code": -32602, 00:11:54.117 "message": "Invalid MN 5r{oH/st)Rsc$anx]n.0RcO=A)(i`{ .QEDqmBv,x" 00:11:54.117 }' 00:11:54.117 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:54.117 { 00:11:54.117 "nqn": "nqn.2016-06.io.spdk:cnode31297", 00:11:54.117 "model_number": "5r{oH/st)Rsc$anx]n.0RcO=A)(i`{ .QEDqmBv,x", 00:11:54.117 "method": "nvmf_create_subsystem", 00:11:54.117 "req_id": 1 00:11:54.117 } 00:11:54.117 Got JSON-RPC error response 00:11:54.117 response: 00:11:54.117 { 00:11:54.117 "code": -32602, 00:11:54.117 "message": "Invalid MN 5r{oH/st)Rsc$anx]n.0RcO=A)(i`{ .QEDqmBv,x" 00:11:54.117 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:54.117 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:54.376 [2024-11-15 12:33:34.627040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.376 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:54.634 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:54.634 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:54.634 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:54.634 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:54.634 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:54.892 [2024-11-15 12:33:35.188902] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:54.892 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:54.892 { 00:11:54.892 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:54.892 "listen_address": { 00:11:54.892 "trtype": "tcp", 00:11:54.892 "traddr": "", 00:11:54.892 "trsvcid": "4421" 00:11:54.892 }, 00:11:54.892 "method": "nvmf_subsystem_remove_listener", 00:11:54.892 "req_id": 1 00:11:54.892 } 00:11:54.892 Got JSON-RPC error response 00:11:54.892 response: 00:11:54.892 { 00:11:54.892 "code": -32602, 00:11:54.892 "message": "Invalid parameters" 00:11:54.892 }' 00:11:54.892 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:54.892 { 00:11:54.892 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:54.892 "listen_address": { 00:11:54.892 "trtype": "tcp", 00:11:54.892 "traddr": "", 00:11:54.892 "trsvcid": "4421" 00:11:54.892 }, 00:11:54.892 "method": "nvmf_subsystem_remove_listener", 00:11:54.892 "req_id": 1 00:11:54.892 } 00:11:54.892 Got JSON-RPC error response 00:11:54.892 response: 00:11:54.892 { 00:11:54.892 "code": -32602, 00:11:54.892 "message": "Invalid parameters" 00:11:54.892 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:54.892 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26390 -i 0 00:11:55.150 [2024-11-15 12:33:35.461740] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26390: invalid cntlid range [0-65519] 00:11:55.150 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:55.150 { 00:11:55.150 "nqn": "nqn.2016-06.io.spdk:cnode26390", 00:11:55.150 "min_cntlid": 0, 00:11:55.150 "method": "nvmf_create_subsystem", 00:11:55.150 "req_id": 1 00:11:55.150 } 00:11:55.150 Got JSON-RPC error response 00:11:55.150 response: 00:11:55.150 { 00:11:55.150 "code": -32602, 00:11:55.150 "message": "Invalid cntlid range [0-65519]" 00:11:55.150 }' 00:11:55.150 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:55.150 { 00:11:55.150 "nqn": "nqn.2016-06.io.spdk:cnode26390", 00:11:55.150 "min_cntlid": 0, 00:11:55.150 "method": "nvmf_create_subsystem", 00:11:55.150 "req_id": 1 00:11:55.150 } 00:11:55.150 Got JSON-RPC error response 00:11:55.150 response: 00:11:55.150 { 00:11:55.150 "code": -32602, 00:11:55.150 "message": "Invalid cntlid range [0-65519]" 00:11:55.150 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:55.150 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28071 -i 65520 00:11:55.409 [2024-11-15 12:33:35.734600] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28071: invalid cntlid range [65520-65519] 00:11:55.667 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:55.667 { 00:11:55.667 "nqn": "nqn.2016-06.io.spdk:cnode28071", 00:11:55.667 "min_cntlid": 65520, 00:11:55.667 "method": "nvmf_create_subsystem", 00:11:55.667 "req_id": 1 00:11:55.667 } 00:11:55.667 Got JSON-RPC error response 00:11:55.667 response: 00:11:55.667 { 00:11:55.667 "code": -32602, 00:11:55.667 "message": "Invalid cntlid range [65520-65519]" 00:11:55.667 }' 00:11:55.667 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:55.667 { 00:11:55.667 "nqn": "nqn.2016-06.io.spdk:cnode28071", 00:11:55.667 "min_cntlid": 65520, 00:11:55.667 "method": "nvmf_create_subsystem", 00:11:55.667 "req_id": 1 00:11:55.667 } 00:11:55.667 Got JSON-RPC error response 00:11:55.667 response: 00:11:55.667 { 00:11:55.667 "code": -32602, 00:11:55.667 "message": "Invalid cntlid range [65520-65519]" 00:11:55.667 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:55.667 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29629 -I 0 00:11:55.667 [2024-11-15 12:33:36.003495] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29629: invalid cntlid range [1-0] 00:11:55.925 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:55.925 { 00:11:55.925 "nqn": "nqn.2016-06.io.spdk:cnode29629", 00:11:55.925 "max_cntlid": 0, 00:11:55.925 "method": "nvmf_create_subsystem", 00:11:55.925 "req_id": 1 00:11:55.925 } 00:11:55.925 Got JSON-RPC error response 00:11:55.925 response: 00:11:55.925 { 00:11:55.925 "code": -32602, 00:11:55.925 "message": "Invalid cntlid range [1-0]" 00:11:55.925 }' 00:11:55.925 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:55.925 { 00:11:55.925 "nqn": "nqn.2016-06.io.spdk:cnode29629", 00:11:55.925 "max_cntlid": 0, 00:11:55.925 "method": "nvmf_create_subsystem", 00:11:55.925 "req_id": 1 00:11:55.925 } 00:11:55.925 Got JSON-RPC error response 00:11:55.925 response: 00:11:55.925 { 00:11:55.925 "code": -32602, 00:11:55.925 "message": "Invalid cntlid range [1-0]" 00:11:55.925 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:55.925 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28135 -I 65520 00:11:56.183 [2024-11-15 12:33:36.284412] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28135: invalid cntlid range [1-65520] 00:11:56.183 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:56.183 { 00:11:56.183 "nqn": "nqn.2016-06.io.spdk:cnode28135", 00:11:56.183 "max_cntlid": 65520, 00:11:56.183 "method": "nvmf_create_subsystem", 00:11:56.183 "req_id": 1 00:11:56.183 } 00:11:56.183 Got JSON-RPC error response 00:11:56.183 response: 00:11:56.183 { 00:11:56.183 "code": -32602, 00:11:56.183 "message": "Invalid cntlid range [1-65520]" 00:11:56.183 }' 00:11:56.183 
12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:56.183 { 00:11:56.183 "nqn": "nqn.2016-06.io.spdk:cnode28135", 00:11:56.183 "max_cntlid": 65520, 00:11:56.183 "method": "nvmf_create_subsystem", 00:11:56.183 "req_id": 1 00:11:56.183 } 00:11:56.183 Got JSON-RPC error response 00:11:56.183 response: 00:11:56.183 { 00:11:56.183 "code": -32602, 00:11:56.183 "message": "Invalid cntlid range [1-65520]" 00:11:56.183 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:56.183 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17855 -i 6 -I 5 00:11:56.441 [2024-11-15 12:33:36.573352] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17855: invalid cntlid range [6-5] 00:11:56.441 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:56.441 { 00:11:56.441 "nqn": "nqn.2016-06.io.spdk:cnode17855", 00:11:56.441 "min_cntlid": 6, 00:11:56.441 "max_cntlid": 5, 00:11:56.441 "method": "nvmf_create_subsystem", 00:11:56.441 "req_id": 1 00:11:56.441 } 00:11:56.441 Got JSON-RPC error response 00:11:56.441 response: 00:11:56.441 { 00:11:56.441 "code": -32602, 00:11:56.441 "message": "Invalid cntlid range [6-5]" 00:11:56.441 }' 00:11:56.441 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:56.441 { 00:11:56.441 "nqn": "nqn.2016-06.io.spdk:cnode17855", 00:11:56.441 "min_cntlid": 6, 00:11:56.441 "max_cntlid": 5, 00:11:56.441 "method": "nvmf_create_subsystem", 00:11:56.441 "req_id": 1 00:11:56.441 } 00:11:56.441 Got JSON-RPC error response 00:11:56.441 response: 00:11:56.441 { 00:11:56.441 "code": -32602, 00:11:56.441 "message": "Invalid cntlid range [6-5]" 00:11:56.441 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:56.441 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:56.441 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:56.441 { 00:11:56.441 "name": "foobar", 00:11:56.441 "method": "nvmf_delete_target", 00:11:56.441 "req_id": 1 00:11:56.441 } 00:11:56.442 Got JSON-RPC error response 00:11:56.442 response: 00:11:56.442 { 00:11:56.442 "code": -32602, 00:11:56.442 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:56.442 }' 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:56.442 { 00:11:56.442 "name": "foobar", 00:11:56.442 "method": "nvmf_delete_target", 00:11:56.442 "req_id": 1 00:11:56.442 } 00:11:56.442 Got JSON-RPC error response 00:11:56.442 response: 00:11:56.442 { 00:11:56.442 "code": -32602, 00:11:56.442 "message": "The specified target doesn't exist, cannot delete it." 
00:11:56.442 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.442 rmmod nvme_tcp 00:11:56.442 rmmod nvme_fabrics 00:11:56.442 rmmod nvme_keyring 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 981907 ']' 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 981907 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 981907 ']' 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 981907 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.442 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 981907 00:11:56.703 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.703 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.703 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 981907' 00:11:56.703 killing process with pid 981907 00:11:56.703 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 981907 00:11:56.703 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 981907 00:11:56.703 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:56.703 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:56.703 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:56.703 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:11:56.703 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:11:56.703 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:56.703 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:11:56.703 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.703 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:56.703 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.703 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.703 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.240 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:59.240 00:11:59.240 real 0m9.191s 00:11:59.240 user 0m21.563s 00:11:59.240 sys 0m2.618s 00:11:59.240 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.240 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.240 ************************************ 00:11:59.240 END TEST nvmf_invalid 00:11:59.241 ************************************ 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.241 ************************************ 00:11:59.241 START TEST nvmf_connect_stress 00:11:59.241 ************************************ 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:59.241 * Looking for test storage... 
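For reference, the nvmftestfini teardown traced just above (closing out the nvmf_invalid run) boils down to a handful of commands once the xtrace noise is stripped. A rough sketch only, reusing the PID, namespace and interface names from this particular run (981907, cvl_0_0_ns_spdk, cvl_0_1); the step marked as an assumption is what _remove_spdk_ns presumably does, since the log only shows the function name:

# hedged sketch of the cleanup sequence shown in the log, not the test's literal code
sync
modprobe -v -r nvme-tcp            # the log shows this also rmmod's nvme_fabrics and nvme_keyring as unused deps
modprobe -v -r nvme-fabrics
kill 981907 && wait 981907         # stop the nvmf_tgt process started for this test
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules, keep the rest
ip netns delete cvl_0_0_ns_spdk    # assumption: the namespace _remove_spdk_ns tears down
ip -4 addr flush cvl_0_1           # flush the leftover initiator-side address, as logged above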
00:11:59.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:59.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.241 --rc genhtml_branch_coverage=1 00:11:59.241 --rc genhtml_function_coverage=1 00:11:59.241 --rc genhtml_legend=1 00:11:59.241 --rc geninfo_all_blocks=1 00:11:59.241 --rc geninfo_unexecuted_blocks=1 00:11:59.241 00:11:59.241 ' 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:59.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.241 --rc genhtml_branch_coverage=1 00:11:59.241 --rc genhtml_function_coverage=1 00:11:59.241 --rc genhtml_legend=1 00:11:59.241 --rc geninfo_all_blocks=1 00:11:59.241 --rc geninfo_unexecuted_blocks=1 00:11:59.241 00:11:59.241 ' 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:59.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.241 --rc genhtml_branch_coverage=1 00:11:59.241 --rc genhtml_function_coverage=1 00:11:59.241 --rc genhtml_legend=1 00:11:59.241 --rc geninfo_all_blocks=1 00:11:59.241 --rc geninfo_unexecuted_blocks=1 00:11:59.241 00:11:59.241 ' 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:59.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.241 --rc genhtml_branch_coverage=1 00:11:59.241 --rc genhtml_function_coverage=1 00:11:59.241 --rc genhtml_legend=1 00:11:59.241 --rc geninfo_all_blocks=1 00:11:59.241 --rc geninfo_unexecuted_blocks=1 00:11:59.241 00:11:59.241 ' 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:59.241 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:59.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:59.242 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:01.147 12:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:01.147 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:01.148 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:01.148 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:01.148 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:01.148 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:01.148 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:01.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:12:01.407 00:12:01.407 --- 10.0.0.2 ping statistics --- 00:12:01.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.407 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:01.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:12:01.407 00:12:01.407 --- 10.0.0.1 ping statistics --- 00:12:01.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.407 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=984549 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 984549 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 984549 ']' 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:01.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.407 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.407 [2024-11-15 12:33:41.682904] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:12:01.407 [2024-11-15 12:33:41.683000] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.666 [2024-11-15 12:33:41.756897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:01.666 [2024-11-15 12:33:41.818147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.666 [2024-11-15 12:33:41.818197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.666 [2024-11-15 12:33:41.818225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.666 [2024-11-15 12:33:41.818237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.666 [2024-11-15 12:33:41.818247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.666 [2024-11-15 12:33:41.819843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.666 [2024-11-15 12:33:41.819895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.666 [2024-11-15 12:33:41.819899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.666 [2024-11-15 12:33:41.968360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
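Putting the pieces together: the connect_stress run above starts nvmf_tgt inside the network namespace prepared by nvmftestinit, waits for its RPC socket (the waitforlisten step logged above), and then drives it through rpc_cmd, which ultimately issues scripts/rpc.py calls. A minimal sketch of the equivalent direct invocations, using only the paths, NQN, address and sizes captured in this run; not the test's literal code:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# start the target in the namespace, as logged above (-m 0xE -> cores 1-3, -e 0xFFFF -> tracepoint group mask, -i 0 -> shm id)
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# configure it over JSON-RPC on the local /var/tmp/spdk.sock
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$SPDK/scripts/rpc.py bdev_null_create NULL1 1000 512
# the stress client then hammers that listener for 10 seconds, exactly as invoked just below in the log
$SPDK/test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &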
00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.666 [2024-11-15 12:33:41.985686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.666 NULL1 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=984694 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:01.666 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:01.666 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:01.666 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.666 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.666 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.666 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.925 12:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.925 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.183 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.183 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:02.183 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.183 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.183 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.441 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.441 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:02.441 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.441 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.441 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.699 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.699 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:02.699 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.699 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.699 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.265 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.265 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:03.265 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.265 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.265 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.522 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.523 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:03.523 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.523 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.523 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.780 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.780 12:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:03.780 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.780 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.780 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.038 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.038 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:04.038 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.038 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.038 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.296 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.296 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:04.296 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.296 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.296 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.872 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.872 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:04.872 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.872 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.872 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.131 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.131 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:05.131 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.131 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.131 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.388 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.388 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:05.388 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.388 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.388 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.646 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.646 12:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:05.646 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.646 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.646 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.904 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.904 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:05.904 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.904 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.904 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.469 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.469 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:06.469 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.469 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.469 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.727 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.727 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:06.727 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.727 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.727 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.986 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.986 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:06.986 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.986 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.986 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:07.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.244 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.501 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.502 12:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:07.502 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.502 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.502 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.067 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.067 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:08.067 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.067 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.067 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.325 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.325 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:08.325 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.325 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.325 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.583 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.583 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:08.583 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.583 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.583 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.841 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.841 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:08.841 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.841 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.841 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.098 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.098 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:09.098 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.098 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.098 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.664 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.664 12:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:09.664 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.664 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.664 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.922 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.922 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:09.922 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.922 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.922 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.179 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.179 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:10.179 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.179 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.179 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.437 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.437 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:10.437 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.437 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.437 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.003 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.003 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:11.003 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.003 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.003 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.260 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.260 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:11.260 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.260 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.260 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.518 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.518 12:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:11.518 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.518 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.518 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.776 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.776 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:11.776 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.776 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.776 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.033 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:12.033 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.033 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 984694 00:12:12.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (984694) - No such process 00:12:12.033 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 984694 00:12:12.033 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:12.033 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:12.033 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:12.033 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:12.033 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:12.033 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:12.033 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:12.033 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:12.033 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:12.033 rmmod nvme_tcp 00:12:12.033 rmmod nvme_fabrics 00:12:12.033 rmmod nvme_keyring 00:12:12.033 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:12.291 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:12.291 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:12.291 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 984549 ']' 00:12:12.291 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 984549 00:12:12.291 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 984549 ']' 00:12:12.291 12:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 984549 00:12:12.291 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:12.291 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.291 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 984549 00:12:12.291 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:12.291 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:12.291 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 984549' 00:12:12.291 killing process with pid 984549 00:12:12.291 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 984549 00:12:12.291 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 984549 00:12:12.550 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:12.550 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:12.550 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:12.550 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:12.550 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:12.550 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:12.550 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:12.550 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:12.550 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:12.550 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.550 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.550 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.524 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.524 00:12:14.524 real 0m15.569s 00:12:14.524 user 0m38.609s 00:12:14.524 sys 0m6.061s 00:12:14.524 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.524 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.524 ************************************ 00:12:14.524 END TEST nvmf_connect_stress 00:12:14.524 ************************************ 00:12:14.524 12:33:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:14.524 12:33:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.524 12:33:54 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.524 12:33:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.524 ************************************ 00:12:14.524 START TEST nvmf_fused_ordering 00:12:14.524 ************************************ 00:12:14.524 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:14.524 * Looking for test storage... 00:12:14.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.524 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:14.524 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:14.524 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:14.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.783 --rc genhtml_branch_coverage=1 00:12:14.783 --rc genhtml_function_coverage=1 00:12:14.783 --rc genhtml_legend=1 00:12:14.783 --rc geninfo_all_blocks=1 00:12:14.783 --rc geninfo_unexecuted_blocks=1 00:12:14.783 00:12:14.783 ' 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:14.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.783 --rc genhtml_branch_coverage=1 00:12:14.783 --rc genhtml_function_coverage=1 00:12:14.783 --rc genhtml_legend=1 00:12:14.783 --rc geninfo_all_blocks=1 00:12:14.783 --rc geninfo_unexecuted_blocks=1 00:12:14.783 00:12:14.783 ' 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:14.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.783 --rc genhtml_branch_coverage=1 00:12:14.783 --rc genhtml_function_coverage=1 00:12:14.783 --rc genhtml_legend=1 00:12:14.783 --rc geninfo_all_blocks=1 00:12:14.783 --rc geninfo_unexecuted_blocks=1 00:12:14.783 00:12:14.783 ' 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:14.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.783 --rc genhtml_branch_coverage=1 00:12:14.783 --rc genhtml_function_coverage=1 00:12:14.783 --rc genhtml_legend=1 00:12:14.783 --rc geninfo_all_blocks=1 00:12:14.783 --rc geninfo_unexecuted_blocks=1 00:12:14.783 00:12:14.783 ' 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.783 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:14.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:14.784 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:17.325 12:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:17.325 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:17.325 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:17.325 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:17.325 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:17.325 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:17.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:12:17.326 00:12:17.326 --- 10.0.0.2 ping statistics --- 00:12:17.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.326 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:17.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:12:17.326 00:12:17.326 --- 10.0.0.1 ping statistics --- 00:12:17.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.326 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=987859 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 987859 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 987859 ']' 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:17.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.326 [2024-11-15 12:33:57.347151] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:12:17.326 [2024-11-15 12:33:57.347245] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.326 [2024-11-15 12:33:57.417457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.326 [2024-11-15 12:33:57.469727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.326 [2024-11-15 12:33:57.469800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.326 [2024-11-15 12:33:57.469828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.326 [2024-11-15 12:33:57.469839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.326 [2024-11-15 12:33:57.469855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.326 [2024-11-15 12:33:57.470449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.326 [2024-11-15 12:33:57.608462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.326 [2024-11-15 12:33:57.624667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.326 NULL1 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.326 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:17.585 [2024-11-15 12:33:57.669133] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:12:17.585 [2024-11-15 12:33:57.669168] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid987879 ] 00:12:17.843 Attached to nqn.2016-06.io.spdk:cnode1 00:12:17.843 Namespace ID: 1 size: 1GB 00:12:17.843 fused_ordering(0) 00:12:17.843 fused_ordering(1) 00:12:17.843 fused_ordering(2) 00:12:17.843 fused_ordering(3) 00:12:17.843 fused_ordering(4) 00:12:17.843 fused_ordering(5) 00:12:17.843 fused_ordering(6) 00:12:17.843 fused_ordering(7) 00:12:17.843 fused_ordering(8) 00:12:17.843 fused_ordering(9) 00:12:17.843 fused_ordering(10) 00:12:17.843 fused_ordering(11) 00:12:17.843 fused_ordering(12) 00:12:17.843 fused_ordering(13) 00:12:17.843 fused_ordering(14) 00:12:17.843 fused_ordering(15) 00:12:17.843 fused_ordering(16) 00:12:17.843 fused_ordering(17) 00:12:17.843 fused_ordering(18) 00:12:17.843 fused_ordering(19) 00:12:17.843 fused_ordering(20) 00:12:17.843 fused_ordering(21) 00:12:17.843 fused_ordering(22) 00:12:17.843 fused_ordering(23) 00:12:17.843 fused_ordering(24) 00:12:17.843 fused_ordering(25) 00:12:17.843 fused_ordering(26) 00:12:17.843 fused_ordering(27) 00:12:17.843 fused_ordering(28) 00:12:17.843 fused_ordering(29) 00:12:17.843 fused_ordering(30) 00:12:17.843 fused_ordering(31) 00:12:17.843 fused_ordering(32) 00:12:17.843 fused_ordering(33) 00:12:17.843 fused_ordering(34) 00:12:17.843 fused_ordering(35) 00:12:17.843 fused_ordering(36) 00:12:17.843 fused_ordering(37) 00:12:17.843 fused_ordering(38) 00:12:17.843 fused_ordering(39) 00:12:17.843 fused_ordering(40) 00:12:17.843 fused_ordering(41) 00:12:17.843 fused_ordering(42) 00:12:17.843 fused_ordering(43) 00:12:17.843 fused_ordering(44) 00:12:17.843 fused_ordering(45) 00:12:17.843 fused_ordering(46) 00:12:17.843 fused_ordering(47) 00:12:17.843 fused_ordering(48) 00:12:17.843 fused_ordering(49) 00:12:17.843 fused_ordering(50) 00:12:17.843 fused_ordering(51) 00:12:17.843 fused_ordering(52) 00:12:17.843 fused_ordering(53) 00:12:17.843 fused_ordering(54) 00:12:17.843 fused_ordering(55) 00:12:17.843 fused_ordering(56) 00:12:17.843 fused_ordering(57) 00:12:17.843 fused_ordering(58) 00:12:17.843 fused_ordering(59) 00:12:17.843 fused_ordering(60) 00:12:17.843 fused_ordering(61) 00:12:17.843 fused_ordering(62) 00:12:17.843 fused_ordering(63) 00:12:17.843 fused_ordering(64) 00:12:17.843 fused_ordering(65) 00:12:17.843 fused_ordering(66) 00:12:17.843 fused_ordering(67) 00:12:17.843 fused_ordering(68) 00:12:17.843 fused_ordering(69) 00:12:17.843 fused_ordering(70) 00:12:17.843 fused_ordering(71) 00:12:17.843 fused_ordering(72) 00:12:17.843 fused_ordering(73) 00:12:17.843 fused_ordering(74) 00:12:17.843 fused_ordering(75) 00:12:17.843 fused_ordering(76) 00:12:17.843 fused_ordering(77) 00:12:17.843 fused_ordering(78) 00:12:17.843 fused_ordering(79) 00:12:17.843 fused_ordering(80) 00:12:17.843 fused_ordering(81) 00:12:17.843 fused_ordering(82) 00:12:17.843 fused_ordering(83) 00:12:17.843 fused_ordering(84) 00:12:17.843 fused_ordering(85) 00:12:17.843 fused_ordering(86) 00:12:17.843 fused_ordering(87) 00:12:17.843 fused_ordering(88) 00:12:17.844 fused_ordering(89) 00:12:17.844 fused_ordering(90) 00:12:17.844 fused_ordering(91) 00:12:17.844 fused_ordering(92) 00:12:17.844 fused_ordering(93) 00:12:17.844 fused_ordering(94) 00:12:17.844 fused_ordering(95) 00:12:17.844 fused_ordering(96) 00:12:17.844 fused_ordering(97) 00:12:17.844 fused_ordering(98) 
00:12:17.844 fused_ordering(99) ... fused_ordering(958) 00:12:19.806 (the per-command fused_ordering(N) progress lines continue in unbroken sequence between these two entries)
00:12:19.806 fused_ordering(959) 00:12:19.806 fused_ordering(960) 00:12:19.806 fused_ordering(961) 00:12:19.806 fused_ordering(962) 00:12:19.806 fused_ordering(963) 00:12:19.806 fused_ordering(964) 00:12:19.806 fused_ordering(965) 00:12:19.806 fused_ordering(966) 00:12:19.806 fused_ordering(967) 00:12:19.806 fused_ordering(968) 00:12:19.806 fused_ordering(969) 00:12:19.806 fused_ordering(970) 00:12:19.806 fused_ordering(971) 00:12:19.806 fused_ordering(972) 00:12:19.806 fused_ordering(973) 00:12:19.806 fused_ordering(974) 00:12:19.806 fused_ordering(975) 00:12:19.806 fused_ordering(976) 00:12:19.806 fused_ordering(977) 00:12:19.806 fused_ordering(978) 00:12:19.806 fused_ordering(979) 00:12:19.806 fused_ordering(980) 00:12:19.806 fused_ordering(981) 00:12:19.806 fused_ordering(982) 00:12:19.806 fused_ordering(983) 00:12:19.806 fused_ordering(984) 00:12:19.806 fused_ordering(985) 00:12:19.806 fused_ordering(986) 00:12:19.806 fused_ordering(987) 00:12:19.806 fused_ordering(988) 00:12:19.806 fused_ordering(989) 00:12:19.806 fused_ordering(990) 00:12:19.806 fused_ordering(991) 00:12:19.806 fused_ordering(992) 00:12:19.806 fused_ordering(993) 00:12:19.806 fused_ordering(994) 00:12:19.806 fused_ordering(995) 00:12:19.806 fused_ordering(996) 00:12:19.806 fused_ordering(997) 00:12:19.806 fused_ordering(998) 00:12:19.806 fused_ordering(999) 00:12:19.806 fused_ordering(1000) 00:12:19.806 fused_ordering(1001) 00:12:19.806 fused_ordering(1002) 00:12:19.806 fused_ordering(1003) 00:12:19.806 fused_ordering(1004) 00:12:19.806 fused_ordering(1005) 00:12:19.806 fused_ordering(1006) 00:12:19.806 fused_ordering(1007) 00:12:19.806 fused_ordering(1008) 00:12:19.806 fused_ordering(1009) 00:12:19.806 fused_ordering(1010) 00:12:19.806 fused_ordering(1011) 00:12:19.806 fused_ordering(1012) 00:12:19.806 fused_ordering(1013) 00:12:19.806 fused_ordering(1014) 00:12:19.806 fused_ordering(1015) 00:12:19.806 fused_ordering(1016) 00:12:19.806 fused_ordering(1017) 00:12:19.806 fused_ordering(1018) 00:12:19.806 fused_ordering(1019) 00:12:19.806 fused_ordering(1020) 00:12:19.806 fused_ordering(1021) 00:12:19.806 fused_ordering(1022) 00:12:19.806 fused_ordering(1023) 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:19.806 rmmod nvme_tcp 00:12:19.806 rmmod nvme_fabrics 00:12:19.806 rmmod nvme_keyring 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:19.806 12:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 987859 ']' 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 987859 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 987859 ']' 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 987859 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.806 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 987859 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 987859' 00:12:20.065 killing process with pid 987859 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 987859 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 987859 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.065 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:22.607 00:12:22.607 real 0m7.691s 00:12:22.607 user 0m5.095s 00:12:22.607 sys 0m3.258s 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:22.607 ************************************ 00:12:22.607 END TEST nvmf_fused_ordering 00:12:22.607 
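nvmftestfini above unwinds that setup: it unloads the initiator-side NVMe kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kills the target application (pid 987859 in this run), and strips the SPDK_NVMF iptables rules before removing the test network namespace. Reduced to its essential commands it is roughly the following; $NVMF_APP_PID is a stand-in for the pid recorded at startup, and the final ip netns delete is an assumption about what the namespace cleanup helper does rather than a line taken from this log.

    # Unload the host-side NVMe/TCP modules pulled in during the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the nvmf target application and reap it ($NVMF_APP_PID is hypothetical)
    kill "$NVMF_APP_PID" && wait "$NVMF_APP_PID"
    # Drop only the SPDK_NVMF firewall rules added for the run
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Tear down the target-side network namespace (assumed equivalent of the cleanup helper)
    ip netns delete cvl_0_0_ns_spdk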
************************************ 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:22.607 ************************************ 00:12:22.607 START TEST nvmf_ns_masking 00:12:22.607 ************************************ 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:22.607 * Looking for test storage... 00:12:22.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.607 --rc genhtml_branch_coverage=1 00:12:22.607 --rc genhtml_function_coverage=1 00:12:22.607 --rc genhtml_legend=1 00:12:22.607 --rc geninfo_all_blocks=1 00:12:22.607 --rc geninfo_unexecuted_blocks=1 00:12:22.607 00:12:22.607 ' 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.607 --rc genhtml_branch_coverage=1 00:12:22.607 --rc genhtml_function_coverage=1 00:12:22.607 --rc genhtml_legend=1 00:12:22.607 --rc geninfo_all_blocks=1 00:12:22.607 --rc geninfo_unexecuted_blocks=1 00:12:22.607 00:12:22.607 ' 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.607 --rc genhtml_branch_coverage=1 00:12:22.607 --rc genhtml_function_coverage=1 00:12:22.607 --rc genhtml_legend=1 00:12:22.607 --rc geninfo_all_blocks=1 00:12:22.607 --rc geninfo_unexecuted_blocks=1 00:12:22.607 00:12:22.607 ' 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.607 --rc genhtml_branch_coverage=1 00:12:22.607 --rc genhtml_function_coverage=1 00:12:22.607 --rc genhtml_legend=1 00:12:22.607 --rc geninfo_all_blocks=1 00:12:22.607 --rc geninfo_unexecuted_blocks=1 00:12:22.607 00:12:22.607 ' 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.607 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:22.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f9ce3bd9-4bce-49a0-81c8-8a74d8352f27 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=309af102-eca6-4ba9-9c1d-5f5d6b686c3d 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=83442052-ce18-4c39-8f9d-fdcf6acdd591 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:22.608 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:24.516 12:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:24.516 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:24.516 12:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:24.516 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:24.516 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:24.516 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:24.516 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:24.517 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.517 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:24.517 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:24.517 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:24.517 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:24.777 12:34:04 
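
For readability: the nvmf_tcp_init sequence traced above boils down to the commands below. The interface names (cvl_0_0, cvl_0_1) and the 10.0.0.x addresses are the ones detected on this node; this is a condensed sketch, not the full nvmf/common.sh logic.

    # target-side port moves into its own network namespace; the initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator / host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
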
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:24.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:12:24.777 00:12:24.777 --- 10.0.0.2 ping statistics --- 00:12:24.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.777 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:24.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:24.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:12:24.777 00:12:24.777 --- 10.0.0.1 ping statistics --- 00:12:24.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.777 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:24.777 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:24.778 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:24.778 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:24.778 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:24.778 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:24.778 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=990210 00:12:24.778 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:24.778 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 990210 00:12:24.778 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 990210 ']' 00:12:24.778 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
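
Condensed, the firewall/connectivity check and the target start-up above are roughly the following (binary paths shortened; the iptables rule is tagged with an SPDK_NVMF comment so the teardown at the end can strip it again):

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> host
    modprobe nvme-tcp                                                    # kernel initiator driver
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &  # target app inside the netns
    # waitforlisten then polls until the target answers RPCs on /var/tmp/spdk.sock
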
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.778 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.778 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.778 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.778 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:24.778 [2024-11-15 12:34:05.059461] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:12:24.778 [2024-11-15 12:34:05.059551] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.036 [2024-11-15 12:34:05.142095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.036 [2024-11-15 12:34:05.200245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.036 [2024-11-15 12:34:05.200309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.036 [2024-11-15 12:34:05.200338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.036 [2024-11-15 12:34:05.200350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.036 [2024-11-15 12:34:05.200360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:25.036 [2024-11-15 12:34:05.201027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.036 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.036 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:25.036 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:25.036 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:25.036 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:25.036 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.036 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:25.602 [2024-11-15 12:34:05.652822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.602 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:25.602 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:25.602 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:25.861 Malloc1 00:12:25.861 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:26.119 Malloc2 00:12:26.120 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:26.377 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:26.943 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.944 [2024-11-15 12:34:07.249521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.944 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:26.944 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 83442052-ce18-4c39-8f9d-fdcf6acdd591 -a 10.0.0.2 -s 4420 -i 4 00:12:27.201 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.201 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:27.201 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.201 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:27.202 
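
The provisioning traced above sets up the masking test's baseline target; with rpc.py and nvme standing in for the full paths, it is approximately:

    rpc.py nvmf_create_transport -t tcp -o -u 8192                       # TCP transport, 8 KiB in-capsule data
    rpc.py bdev_malloc_create 64 512 -b Malloc1                          # 64 MiB bdev, 512 B blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1                 # auto-visible namespace
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 83442052-ce18-4c39-8f9d-fdcf6acdd591 -a 10.0.0.2 -s 4420 -i 4                 # -I: host UUID, -i: I/O queues
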
12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:29.102 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:29.102 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:29.102 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.102 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:29.102 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.102 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:29.102 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:29.102 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:29.360 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:29.360 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:29.360 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:29.360 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:29.360 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:29.360 [ 0]:0x1 00:12:29.360 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:29.360 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:29.360 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4965aab200cb411f88f34e23865d2006 00:12:29.360 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4965aab200cb411f88f34e23865d2006 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:29.360 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:29.619 [ 0]:0x1 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4965aab200cb411f88f34e23865d2006 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4965aab200cb411f88f34e23865d2006 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:29.619 12:34:09 
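
The "[ 0]:0x1" and nguid lines above come from the script's ns_is_visible helper; reconstructed from the trace it is essentially the check below, with /dev/nvme0 being the controller found via nvme list-subsys a few lines earlier:

    ns_is_visible() {
        # the namespace must appear in the controller's active-NSID list...
        nvme list-ns /dev/nvme0 | grep "$1" || return 1
        # ...and report a real (non-zero) NGUID in Identify Namespace
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
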
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:29.619 [ 1]:0x2 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5143e5c14dc645f480fd62d144c001aa 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5143e5c14dc645f480fd62d144c001aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:29.619 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.877 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.444 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:30.702 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:30.702 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 83442052-ce18-4c39-8f9d-fdcf6acdd591 -a 10.0.0.2 -s 4420 -i 4 00:12:30.702 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:30.702 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:30.702 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.702 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:30.702 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:30.702 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
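
This is the first masking step proper: namespace 1 is dropped and re-added without automatic visibility, so the plain reconnect that follows must no longer see it, while namespace 2 (added the ordinary way) stays visible. In RPC terms:

    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
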
return 0 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:33.226 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:33.227 [ 0]:0x2 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=5143e5c14dc645f480fd62d144c001aa 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5143e5c14dc645f480fd62d144c001aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:33.227 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:33.485 [ 0]:0x1 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4965aab200cb411f88f34e23865d2006 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4965aab200cb411f88f34e23865d2006 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:33.485 [ 1]:0x2 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5143e5c14dc645f480fd62d144c001aa 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5143e5c14dc645f480fd62d144c001aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:33.485 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:33.744 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:33.744 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:33.744 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:33.744 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:33.744 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:33.744 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:33.744 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:33.744 12:34:13 
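
The visibility toggling above is the per-host masking pair; as the later error for namespace 2 suggests, these RPCs only apply to namespaces created with --no-auto-visible:

    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # ns 1 becomes visible to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # ns 1 hidden from host1 again
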
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:33.744 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.744 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:33.744 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:33.744 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:33.744 [ 0]:0x2 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5143e5c14dc645f480fd62d144c001aa 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5143e5c14dc645f480fd62d144c001aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:33.744 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.001 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:34.258 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:34.259 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 83442052-ce18-4c39-8f9d-fdcf6acdd591 -a 10.0.0.2 -s 4420 -i 4 00:12:34.519 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:34.519 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:34.519 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.519 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:34.519 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:34.519 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:36.419 [ 0]:0x1 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:36.419 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.677 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4965aab200cb411f88f34e23865d2006 00:12:36.677 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4965aab200cb411f88f34e23865d2006 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.677 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:36.678 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.678 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:36.678 [ 1]:0x2 00:12:36.678 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:36.678 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.678 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5143e5c14dc645f480fd62d144c001aa 00:12:36.678 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5143e5c14dc645f480fd62d144c001aa != 
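
waitforserial, used after every reconnect here, just polls lsblk until the expected number of block devices carrying the target's serial shows up. A rough reconstruction of the helper as exercised above (not the exact autotest_common.sh code):

    waitforserial() {                                      # waitforserial SERIAL [expected_count]
        local serial=$1 want=${2:-1} i=0 got=0
        while (( i++ <= 15 )); do
            sleep 2
            got=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( got == want )) && return 0
        done
        return 1
    }
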
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.678 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:36.936 [ 0]:0x2 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5143e5c14dc645f480fd62d144c001aa 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5143e5c14dc645f480fd62d144c001aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.936 12:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:36.936 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:37.503 [2024-11-15 12:34:17.540163] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:37.503 request: 00:12:37.503 { 00:12:37.503 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.503 "nsid": 2, 00:12:37.503 "host": "nqn.2016-06.io.spdk:host1", 00:12:37.503 "method": "nvmf_ns_remove_host", 00:12:37.503 "req_id": 1 00:12:37.503 } 00:12:37.503 Got JSON-RPC error response 00:12:37.503 response: 00:12:37.503 { 00:12:37.503 "code": -32602, 00:12:37.503 "message": "Invalid parameters" 00:12:37.503 } 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:37.503 12:34:17 
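
The "Invalid parameters" response above is the expected outcome of the negative test at ns_masking.sh line 111: namespace 2 was added without --no-auto-visible, so the target refuses to change its per-host visibility, and the NOT wrapper turns that failure into a pass. Simplified, the wrapper is just an exit-status inverter (the real helper also treats signal exits specially):

    NOT() { "$@" && return 1 || return 0; }    # succeed only if the wrapped command fails
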
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:37.503 [ 0]:0x2 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5143e5c14dc645f480fd62d144c001aa 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5143e5c14dc645f480fd62d144c001aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=991833 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 991833 /var/tmp/host.sock 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 991833 ']' 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:37.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.503 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:37.503 [2024-11-15 12:34:17.749653] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:12:37.503 [2024-11-15 12:34:17.749756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991833 ] 00:12:37.503 [2024-11-15 12:34:17.816323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.762 [2024-11-15 12:34:17.874324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.020 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.020 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:38.020 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.279 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:38.537 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f9ce3bd9-4bce-49a0-81c8-8a74d8352f27 00:12:38.537 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:38.537 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F9CE3BD94BCE49A081C88A74D8352F27 -i 00:12:38.795 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 309af102-eca6-4ba9-9c1d-5f5d6b686c3d 00:12:38.795 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:38.795 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 309AF102ECA64BA99C1D5F5D6B686C3D -i 00:12:39.053 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
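
For the two-host phase the namespaces come back with explicit NGUIDs derived from the UUIDs generated at the top of the script; judging by the trace output, uuid2nguid is an uppercase/strip-dashes transform, and -i here appears to be the short form of the --no-auto-visible flag used earlier:

    nguid1=$(echo f9ce3bd9-4bce-49a0-81c8-8a74d8352f27 | tr -d - | tr '[:lower:]' '[:upper:]')
    # -> F9CE3BD94BCE49A081C88A74D8352F27
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid1" -i
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 309AF102ECA64BA99C1D5F5D6B686C3D -i
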
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:39.311 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:39.569 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:39.569 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:39.827 nvme0n1 00:12:39.827 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:39.827 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:40.393 nvme1n2 00:12:40.393 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:40.393 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:40.393 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:40.393 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:40.393 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:40.652 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:40.652 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:40.652 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:40.652 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:40.911 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f9ce3bd9-4bce-49a0-81c8-8a74d8352f27 == \f\9\c\e\3\b\d\9\-\4\b\c\e\-\4\9\a\0\-\8\1\c\8\-\8\a\7\4\d\8\3\5\2\f\2\7 ]] 00:12:40.911 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:40.911 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:40.911 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:41.169 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
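
The host side of the check is the second SPDK app (spdk_tgt on /var/tmp/host.sock, pid 991833 above) attaching to the subsystem once per host NQN; each attachment enumerates only the namespaces granted to that host identity, which is what the nvme0n1/nvme1n2 name and UUID comparisons confirm. Roughly, with rpc.py shortened:

    hostrpc() { rpc.py -s /var/tmp/host.sock "$@"; }       # helper as used by ns_masking.sh
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0    # host1 sees ns 1 -> nvme0n1
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1    # host2 sees ns 2 -> nvme1n2
    hostrpc bdev_get_bdevs | jq -r '.[].name'                                  # -> nvme0n1, nvme1n2
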
309af102-eca6-4ba9-9c1d-5f5d6b686c3d == \3\0\9\a\f\1\0\2\-\e\c\a\6\-\4\b\a\9\-\9\c\1\d\-\5\f\5\d\6\b\6\8\6\c\3\d ]] 00:12:41.169 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.427 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:41.685 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid f9ce3bd9-4bce-49a0-81c8-8a74d8352f27 00:12:41.685 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:41.685 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F9CE3BD94BCE49A081C88A74D8352F27 00:12:41.685 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:41.685 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F9CE3BD94BCE49A081C88A74D8352F27 00:12:41.685 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:41.685 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.685 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:41.685 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.685 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:41.685 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.685 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:41.685 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:41.685 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F9CE3BD94BCE49A081C88A74D8352F27 00:12:41.944 [2024-11-15 12:34:22.225742] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:41.944 [2024-11-15 12:34:22.225791] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:41.944 [2024-11-15 12:34:22.225823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.944 request: 00:12:41.944 { 00:12:41.944 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.944 "namespace": { 00:12:41.944 "bdev_name": 
"invalid", 00:12:41.944 "nsid": 1, 00:12:41.944 "nguid": "F9CE3BD94BCE49A081C88A74D8352F27", 00:12:41.944 "no_auto_visible": false 00:12:41.944 }, 00:12:41.944 "method": "nvmf_subsystem_add_ns", 00:12:41.944 "req_id": 1 00:12:41.944 } 00:12:41.944 Got JSON-RPC error response 00:12:41.944 response: 00:12:41.944 { 00:12:41.944 "code": -32602, 00:12:41.944 "message": "Invalid parameters" 00:12:41.944 } 00:12:41.944 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:41.944 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:41.944 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:41.944 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:41.944 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid f9ce3bd9-4bce-49a0-81c8-8a74d8352f27 00:12:41.944 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:41.944 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F9CE3BD94BCE49A081C88A74D8352F27 -i 00:12:42.202 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 991833 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 991833 ']' 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 991833 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 991833 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 991833' 00:12:44.730 killing process with pid 991833 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 991833 00:12:44.730 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 991833 00:12:44.988 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.246 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:45.246 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:45.246 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.246 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:45.246 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:45.246 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:45.246 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.246 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:45.246 rmmod nvme_tcp 00:12:45.505 rmmod nvme_fabrics 00:12:45.505 rmmod nvme_keyring 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 990210 ']' 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 990210 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 990210 ']' 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 990210 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 990210 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 990210' 00:12:45.505 killing process with pid 990210 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 990210 00:12:45.505 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 990210 00:12:45.763 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:45.763 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:45.763 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:45.763 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:45.763 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:45.763 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:45.763 
12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:45.763 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:45.763 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:45.763 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.763 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.763 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.672 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:47.672 00:12:47.672 real 0m25.513s 00:12:47.672 user 0m36.938s 00:12:47.672 sys 0m4.853s 00:12:47.672 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.672 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.672 ************************************ 00:12:47.672 END TEST nvmf_ns_masking 00:12:47.672 ************************************ 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.932 ************************************ 00:12:47.932 START TEST nvmf_nvme_cli 00:12:47.932 ************************************ 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:47.932 * Looking for test storage... 
00:12:47.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:47.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.932 --rc genhtml_branch_coverage=1 00:12:47.932 --rc genhtml_function_coverage=1 00:12:47.932 --rc genhtml_legend=1 00:12:47.932 --rc geninfo_all_blocks=1 00:12:47.932 --rc geninfo_unexecuted_blocks=1 00:12:47.932 00:12:47.932 ' 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:47.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.932 --rc genhtml_branch_coverage=1 00:12:47.932 --rc genhtml_function_coverage=1 00:12:47.932 --rc genhtml_legend=1 00:12:47.932 --rc geninfo_all_blocks=1 00:12:47.932 --rc geninfo_unexecuted_blocks=1 00:12:47.932 00:12:47.932 ' 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:47.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.932 --rc genhtml_branch_coverage=1 00:12:47.932 --rc genhtml_function_coverage=1 00:12:47.932 --rc genhtml_legend=1 00:12:47.932 --rc geninfo_all_blocks=1 00:12:47.932 --rc geninfo_unexecuted_blocks=1 00:12:47.932 00:12:47.932 ' 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:47.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.932 --rc genhtml_branch_coverage=1 00:12:47.932 --rc genhtml_function_coverage=1 00:12:47.932 --rc genhtml_legend=1 00:12:47.932 --rc geninfo_all_blocks=1 00:12:47.932 --rc geninfo_unexecuted_blocks=1 00:12:47.932 00:12:47.932 ' 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.932 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:47.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:47.933 12:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:47.933 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:50.468 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:50.468 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.468 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.469 
12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:50.469 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:50.469 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:50.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:12:50.469 00:12:50.469 --- 10.0.0.2 ping statistics --- 00:12:50.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.469 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:50.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:12:50.469 00:12:50.469 --- 10.0.0.1 ping statistics --- 00:12:50.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.469 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=994753 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 994753 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 994753 ']' 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.469 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.469 [2024-11-15 12:34:30.586231] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:12:50.469 [2024-11-15 12:34:30.586304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.469 [2024-11-15 12:34:30.655885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.469 [2024-11-15 12:34:30.718264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.469 [2024-11-15 12:34:30.718338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.469 [2024-11-15 12:34:30.718367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.469 [2024-11-15 12:34:30.718379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.469 [2024-11-15 12:34:30.718389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.470 [2024-11-15 12:34:30.720079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.470 [2024-11-15 12:34:30.720144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.470 [2024-11-15 12:34:30.720198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.470 [2024-11-15 12:34:30.720202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.728 [2024-11-15 12:34:30.870654] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.728 Malloc0 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.728 Malloc1 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.728 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.729 [2024-11-15 12:34:30.974829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.729 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:50.987 00:12:50.987 Discovery Log Number of Records 2, Generation counter 2 00:12:50.987 =====Discovery Log Entry 0====== 00:12:50.987 trtype: tcp 00:12:50.987 adrfam: ipv4 00:12:50.987 subtype: current discovery subsystem 00:12:50.987 treq: not required 00:12:50.987 portid: 0 00:12:50.987 trsvcid: 4420 00:12:50.987 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:12:50.987 traddr: 10.0.0.2 00:12:50.987 eflags: explicit discovery connections, duplicate discovery information 00:12:50.987 sectype: none 00:12:50.987 =====Discovery Log Entry 1====== 00:12:50.987 trtype: tcp 00:12:50.987 adrfam: ipv4 00:12:50.987 subtype: nvme subsystem 00:12:50.987 treq: not required 00:12:50.987 portid: 0 00:12:50.987 trsvcid: 4420 00:12:50.987 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:50.987 traddr: 10.0.0.2 00:12:50.987 eflags: none 00:12:50.987 sectype: none 00:12:50.987 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:50.987 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:50.987 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:50.987 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:50.987 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:50.987 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:50.987 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:50.987 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:50.987 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:50.987 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:50.987 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.554 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:51.554 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:12:51.554 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.554 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:51.554 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:51.554 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:54.084 12:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:54.084 /dev/nvme0n2 ]] 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:54.084 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.084 12:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:54.084 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:54.084 rmmod nvme_tcp 00:12:54.343 rmmod nvme_fabrics 00:12:54.343 rmmod nvme_keyring 00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 994753 ']' 00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 994753 00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 994753 ']' 00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 994753 00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 994753 
00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 994753' 00:12:54.343 killing process with pid 994753 00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 994753 00:12:54.343 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 994753 00:12:54.600 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:54.600 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:54.600 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:54.600 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:12:54.600 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:12:54.600 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:54.600 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:12:54.600 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:54.600 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:54.600 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.600 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.600 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.508 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:56.508 00:12:56.508 real 0m8.746s 00:12:56.508 user 0m16.656s 00:12:56.508 sys 0m2.425s 00:12:56.508 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.508 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:56.508 ************************************ 00:12:56.508 END TEST nvmf_nvme_cli 00:12:56.508 ************************************ 00:12:56.508 12:34:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:56.508 12:34:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:56.508 12:34:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:56.508 12:34:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.508 12:34:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:56.767 ************************************ 00:12:56.767 START TEST nvmf_vfio_user 00:12:56.767 ************************************ 00:12:56.767 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 
00:12:56.767 * Looking for test storage... 00:12:56.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.767 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:12:56.768 12:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:56.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.768 --rc genhtml_branch_coverage=1 00:12:56.768 --rc genhtml_function_coverage=1 00:12:56.768 --rc genhtml_legend=1 00:12:56.768 --rc geninfo_all_blocks=1 00:12:56.768 --rc geninfo_unexecuted_blocks=1 00:12:56.768 00:12:56.768 ' 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:56.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.768 --rc genhtml_branch_coverage=1 00:12:56.768 --rc genhtml_function_coverage=1 00:12:56.768 --rc genhtml_legend=1 00:12:56.768 --rc geninfo_all_blocks=1 00:12:56.768 --rc geninfo_unexecuted_blocks=1 00:12:56.768 00:12:56.768 ' 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:56.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.768 --rc genhtml_branch_coverage=1 00:12:56.768 --rc genhtml_function_coverage=1 00:12:56.768 --rc genhtml_legend=1 00:12:56.768 --rc geninfo_all_blocks=1 00:12:56.768 --rc geninfo_unexecuted_blocks=1 00:12:56.768 00:12:56.768 ' 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:56.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.768 --rc genhtml_branch_coverage=1 00:12:56.768 --rc genhtml_function_coverage=1 00:12:56.768 --rc genhtml_legend=1 00:12:56.768 --rc geninfo_all_blocks=1 00:12:56.768 --rc geninfo_unexecuted_blocks=1 00:12:56.768 00:12:56.768 ' 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:56.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=995683 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 995683' 00:12:56.768 Process pid: 995683 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 995683 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 995683 ']' 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.768 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:56.769 [2024-11-15 12:34:37.078687] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:12:56.769 [2024-11-15 12:34:37.078807] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.027 [2024-11-15 12:34:37.146428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.027 [2024-11-15 12:34:37.206529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.027 [2024-11-15 12:34:37.206592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
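The trace above launches the nvmf_tgt application with shared-memory id 0, tracepoint mask 0xFFFF and cores 0-3, then waits for it to listen on the default RPC socket. A condensed sketch of that launch, with waitforlisten approximated by polling the RPC socket (the poll loop is an assumption for illustration, not the helper's actual implementation):

  # Launch the NVMe-oF target as in the log and wait for /var/tmp/spdk.sock to answer RPCs.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # waitforlisten does this more carefully; a plain poll is enough for a sketch
  done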
00:12:57.027 [2024-11-15 12:34:37.206619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.027 [2024-11-15 12:34:37.206630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.027 [2024-11-15 12:34:37.206640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.027 [2024-11-15 12:34:37.208291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.027 [2024-11-15 12:34:37.208356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.027 [2024-11-15 12:34:37.208406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.027 [2024-11-15 12:34:37.208409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.027 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.027 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:12:57.027 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:58.401 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:58.401 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:58.401 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:58.401 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:58.401 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:58.401 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:58.659 Malloc1 00:12:58.659 12:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:58.916 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:59.174 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:59.431 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:59.431 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:59.431 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:59.996 Malloc2 00:12:59.996 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
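The setup_nvmf_vfio_user steps traced here create the VFIOUSER transport and then, for each of the two devices, a 64 MiB malloc bdev, a subsystem, a namespace and a vfio-user listener rooted under /var/run/vfio-user. A condensed sketch of the same sequence (the rpc shell variable is introduced here for brevity; names and paths are the ones the script uses):

  # VFIOUSER transport plus two subsystems, each backed by a 64 MiB / 512 B-block malloc bdev.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $rpc bdev_malloc_create 64 512 -b Malloc$i
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done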
00:13:00.254 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:00.512 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:00.772 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:00.772 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:00.772 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:00.772 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:00.772 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:00.772 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:00.772 [2024-11-15 12:34:40.928357] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:13:00.772 [2024-11-15 12:34:40.928401] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996109 ] 00:13:00.772 [2024-11-15 12:34:40.979799] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:00.772 [2024-11-15 12:34:40.982300] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:00.772 [2024-11-15 12:34:40.982332] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff392a8b000 00:13:00.772 [2024-11-15 12:34:40.983297] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:00.772 [2024-11-15 12:34:40.984292] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:00.772 [2024-11-15 12:34:40.985295] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:00.772 [2024-11-15 12:34:40.986297] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:00.772 [2024-11-15 12:34:40.987303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:00.772 [2024-11-15 12:34:40.988309] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:00.772 [2024-11-15 12:34:40.989310] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:13:00.772 [2024-11-15 12:34:40.990318] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:00.772 [2024-11-15 12:34:40.991324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:00.772 [2024-11-15 12:34:40.991351] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff392a80000 00:13:00.772 [2024-11-15 12:34:40.992475] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:00.772 [2024-11-15 12:34:41.006563] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:00.772 [2024-11-15 12:34:41.006606] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:00.772 [2024-11-15 12:34:41.015463] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:00.772 [2024-11-15 12:34:41.015516] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:00.772 [2024-11-15 12:34:41.015599] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:00.772 [2024-11-15 12:34:41.015625] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:00.772 [2024-11-15 12:34:41.015636] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:00.772 [2024-11-15 12:34:41.016457] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:00.772 [2024-11-15 12:34:41.016477] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:00.772 [2024-11-15 12:34:41.016490] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:00.772 [2024-11-15 12:34:41.017456] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:00.772 [2024-11-15 12:34:41.017474] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:00.772 [2024-11-15 12:34:41.017488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:00.772 [2024-11-15 12:34:41.018462] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:00.772 [2024-11-15 12:34:41.018481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:00.772 [2024-11-15 12:34:41.019464] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
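The debug trace around this point comes from spdk_nvme_identify attaching to the first vfio-user controller: it maps the emulated BARs, then walks the standard NVMe bring-up (read VS and CAP, clear CC.EN and wait for CSTS.RDY = 0, program the admin queue, set CC.EN = 1 and wait for CSTS.RDY = 1) before issuing Identify. The invocation itself, shown earlier in the log and reusable for ad-hoc inspection of a running target (the -L flags only enable the debug logging seen here):

  # Identify the vfio-user controller exported at vfio-user1/1; drop the -L flags for quiet output.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci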
00:13:00.772 [2024-11-15 12:34:41.019484] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:00.772 [2024-11-15 12:34:41.019493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:00.772 [2024-11-15 12:34:41.019504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:00.772 [2024-11-15 12:34:41.019614] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:00.772 [2024-11-15 12:34:41.019621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:00.772 [2024-11-15 12:34:41.019630] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:00.772 [2024-11-15 12:34:41.020478] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:00.772 [2024-11-15 12:34:41.021477] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:00.772 [2024-11-15 12:34:41.022484] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:00.772 [2024-11-15 12:34:41.023482] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:00.772 [2024-11-15 12:34:41.023593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:00.772 [2024-11-15 12:34:41.024496] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:00.772 [2024-11-15 12:34:41.024514] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:00.772 [2024-11-15 12:34:41.024523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:00.772 [2024-11-15 12:34:41.024547] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:00.772 [2024-11-15 12:34:41.024564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:00.772 [2024-11-15 12:34:41.024590] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:00.772 [2024-11-15 12:34:41.024600] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:00.772 [2024-11-15 12:34:41.024607] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:00.772 [2024-11-15 12:34:41.024624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:13:00.772 [2024-11-15 12:34:41.024681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:00.772 [2024-11-15 12:34:41.024711] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:00.772 [2024-11-15 12:34:41.024732] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:00.772 [2024-11-15 12:34:41.024741] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:00.773 [2024-11-15 12:34:41.024749] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:00.773 [2024-11-15 12:34:41.024777] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:00.773 [2024-11-15 12:34:41.024786] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:00.773 [2024-11-15 12:34:41.024795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.024812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.024829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:00.773 [2024-11-15 12:34:41.024847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:00.773 [2024-11-15 12:34:41.024864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.773 [2024-11-15 12:34:41.024881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.773 [2024-11-15 12:34:41.024894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.773 [2024-11-15 12:34:41.024907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.773 [2024-11-15 12:34:41.024915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.024927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.024940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:00.773 [2024-11-15 12:34:41.024952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:00.773 [2024-11-15 12:34:41.024967] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:00.773 
[2024-11-15 12:34:41.024977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.024988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.024998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.025026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:00.773 [2024-11-15 12:34:41.025038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:00.773 [2024-11-15 12:34:41.025118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.025135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.025148] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:00.773 [2024-11-15 12:34:41.025156] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:00.773 [2024-11-15 12:34:41.025162] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:00.773 [2024-11-15 12:34:41.025171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:00.773 [2024-11-15 12:34:41.025187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:00.773 [2024-11-15 12:34:41.025203] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:00.773 [2024-11-15 12:34:41.025221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.025236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.025247] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:00.773 [2024-11-15 12:34:41.025256] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:00.773 [2024-11-15 12:34:41.025261] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:00.773 [2024-11-15 12:34:41.025274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:00.773 [2024-11-15 12:34:41.025299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:00.773 [2024-11-15 12:34:41.025320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.025334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.025346] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:00.773 [2024-11-15 12:34:41.025354] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:00.773 [2024-11-15 12:34:41.025360] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:00.773 [2024-11-15 12:34:41.025369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:00.773 [2024-11-15 12:34:41.025383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:00.773 [2024-11-15 12:34:41.025396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.025407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.025420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.025430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.025438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.025446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.025454] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:00.773 [2024-11-15 12:34:41.025461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:00.773 [2024-11-15 12:34:41.025469] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:00.773 [2024-11-15 12:34:41.025494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:00.773 [2024-11-15 12:34:41.025511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:00.773 [2024-11-15 12:34:41.025530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:00.773 [2024-11-15 12:34:41.025541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:00.773 [2024-11-15 12:34:41.025556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:00.773 [2024-11-15 12:34:41.025567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:00.773 [2024-11-15 12:34:41.025582] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:00.773 [2024-11-15 12:34:41.025597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:00.773 [2024-11-15 12:34:41.025619] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:00.773 [2024-11-15 12:34:41.025629] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:00.773 [2024-11-15 12:34:41.025635] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:00.773 [2024-11-15 12:34:41.025640] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:00.773 [2024-11-15 12:34:41.025646] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:00.773 [2024-11-15 12:34:41.025655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:00.773 [2024-11-15 12:34:41.025666] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:00.773 [2024-11-15 12:34:41.025674] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:00.773 [2024-11-15 12:34:41.025680] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:00.773 [2024-11-15 12:34:41.025689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:00.773 [2024-11-15 12:34:41.025715] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:00.773 [2024-11-15 12:34:41.025737] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:00.773 [2024-11-15 12:34:41.025744] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:00.773 [2024-11-15 12:34:41.025753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:00.773 [2024-11-15 12:34:41.025766] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:00.773 [2024-11-15 12:34:41.025775] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:00.773 [2024-11-15 12:34:41.025781] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:00.773 [2024-11-15 12:34:41.025790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:00.773 [2024-11-15 12:34:41.025802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:00.773 [2024-11-15 12:34:41.025826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:13:00.773 [2024-11-15 12:34:41.025845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:00.773 [2024-11-15 12:34:41.025859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:00.773 ===================================================== 00:13:00.774 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:00.774 ===================================================== 00:13:00.774 Controller Capabilities/Features 00:13:00.774 ================================ 00:13:00.774 Vendor ID: 4e58 00:13:00.774 Subsystem Vendor ID: 4e58 00:13:00.774 Serial Number: SPDK1 00:13:00.774 Model Number: SPDK bdev Controller 00:13:00.774 Firmware Version: 25.01 00:13:00.774 Recommended Arb Burst: 6 00:13:00.774 IEEE OUI Identifier: 8d 6b 50 00:13:00.774 Multi-path I/O 00:13:00.774 May have multiple subsystem ports: Yes 00:13:00.774 May have multiple controllers: Yes 00:13:00.774 Associated with SR-IOV VF: No 00:13:00.774 Max Data Transfer Size: 131072 00:13:00.774 Max Number of Namespaces: 32 00:13:00.774 Max Number of I/O Queues: 127 00:13:00.774 NVMe Specification Version (VS): 1.3 00:13:00.774 NVMe Specification Version (Identify): 1.3 00:13:00.774 Maximum Queue Entries: 256 00:13:00.774 Contiguous Queues Required: Yes 00:13:00.774 Arbitration Mechanisms Supported 00:13:00.774 Weighted Round Robin: Not Supported 00:13:00.774 Vendor Specific: Not Supported 00:13:00.774 Reset Timeout: 15000 ms 00:13:00.774 Doorbell Stride: 4 bytes 00:13:00.774 NVM Subsystem Reset: Not Supported 00:13:00.774 Command Sets Supported 00:13:00.774 NVM Command Set: Supported 00:13:00.774 Boot Partition: Not Supported 00:13:00.774 Memory Page Size Minimum: 4096 bytes 00:13:00.774 Memory Page Size Maximum: 4096 bytes 00:13:00.774 Persistent Memory Region: Not Supported 00:13:00.774 Optional Asynchronous Events Supported 00:13:00.774 Namespace Attribute Notices: Supported 00:13:00.774 Firmware Activation Notices: Not Supported 00:13:00.774 ANA Change Notices: Not Supported 00:13:00.774 PLE Aggregate Log Change Notices: Not Supported 00:13:00.774 LBA Status Info Alert Notices: Not Supported 00:13:00.774 EGE Aggregate Log Change Notices: Not Supported 00:13:00.774 Normal NVM Subsystem Shutdown event: Not Supported 00:13:00.774 Zone Descriptor Change Notices: Not Supported 00:13:00.774 Discovery Log Change Notices: Not Supported 00:13:00.774 Controller Attributes 00:13:00.774 128-bit Host Identifier: Supported 00:13:00.774 Non-Operational Permissive Mode: Not Supported 00:13:00.774 NVM Sets: Not Supported 00:13:00.774 Read Recovery Levels: Not Supported 00:13:00.774 Endurance Groups: Not Supported 00:13:00.774 Predictable Latency Mode: Not Supported 00:13:00.774 Traffic Based Keep ALive: Not Supported 00:13:00.774 Namespace Granularity: Not Supported 00:13:00.774 SQ Associations: Not Supported 00:13:00.774 UUID List: Not Supported 00:13:00.774 Multi-Domain Subsystem: Not Supported 00:13:00.774 Fixed Capacity Management: Not Supported 00:13:00.774 Variable Capacity Management: Not Supported 00:13:00.774 Delete Endurance Group: Not Supported 00:13:00.774 Delete NVM Set: Not Supported 00:13:00.774 Extended LBA Formats Supported: Not Supported 00:13:00.774 Flexible Data Placement Supported: Not Supported 00:13:00.774 00:13:00.774 Controller Memory Buffer Support 00:13:00.774 ================================ 00:13:00.774 
Supported: No 00:13:00.774 00:13:00.774 Persistent Memory Region Support 00:13:00.774 ================================ 00:13:00.774 Supported: No 00:13:00.774 00:13:00.774 Admin Command Set Attributes 00:13:00.774 ============================ 00:13:00.774 Security Send/Receive: Not Supported 00:13:00.774 Format NVM: Not Supported 00:13:00.774 Firmware Activate/Download: Not Supported 00:13:00.774 Namespace Management: Not Supported 00:13:00.774 Device Self-Test: Not Supported 00:13:00.774 Directives: Not Supported 00:13:00.774 NVMe-MI: Not Supported 00:13:00.774 Virtualization Management: Not Supported 00:13:00.774 Doorbell Buffer Config: Not Supported 00:13:00.774 Get LBA Status Capability: Not Supported 00:13:00.774 Command & Feature Lockdown Capability: Not Supported 00:13:00.774 Abort Command Limit: 4 00:13:00.774 Async Event Request Limit: 4 00:13:00.774 Number of Firmware Slots: N/A 00:13:00.774 Firmware Slot 1 Read-Only: N/A 00:13:00.774 Firmware Activation Without Reset: N/A 00:13:00.774 Multiple Update Detection Support: N/A 00:13:00.774 Firmware Update Granularity: No Information Provided 00:13:00.774 Per-Namespace SMART Log: No 00:13:00.774 Asymmetric Namespace Access Log Page: Not Supported 00:13:00.774 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:00.774 Command Effects Log Page: Supported 00:13:00.774 Get Log Page Extended Data: Supported 00:13:00.774 Telemetry Log Pages: Not Supported 00:13:00.774 Persistent Event Log Pages: Not Supported 00:13:00.774 Supported Log Pages Log Page: May Support 00:13:00.774 Commands Supported & Effects Log Page: Not Supported 00:13:00.774 Feature Identifiers & Effects Log Page:May Support 00:13:00.774 NVMe-MI Commands & Effects Log Page: May Support 00:13:00.774 Data Area 4 for Telemetry Log: Not Supported 00:13:00.774 Error Log Page Entries Supported: 128 00:13:00.774 Keep Alive: Supported 00:13:00.774 Keep Alive Granularity: 10000 ms 00:13:00.774 00:13:00.774 NVM Command Set Attributes 00:13:00.774 ========================== 00:13:00.774 Submission Queue Entry Size 00:13:00.774 Max: 64 00:13:00.774 Min: 64 00:13:00.774 Completion Queue Entry Size 00:13:00.774 Max: 16 00:13:00.774 Min: 16 00:13:00.774 Number of Namespaces: 32 00:13:00.774 Compare Command: Supported 00:13:00.774 Write Uncorrectable Command: Not Supported 00:13:00.774 Dataset Management Command: Supported 00:13:00.774 Write Zeroes Command: Supported 00:13:00.774 Set Features Save Field: Not Supported 00:13:00.774 Reservations: Not Supported 00:13:00.774 Timestamp: Not Supported 00:13:00.774 Copy: Supported 00:13:00.774 Volatile Write Cache: Present 00:13:00.774 Atomic Write Unit (Normal): 1 00:13:00.774 Atomic Write Unit (PFail): 1 00:13:00.774 Atomic Compare & Write Unit: 1 00:13:00.774 Fused Compare & Write: Supported 00:13:00.774 Scatter-Gather List 00:13:00.774 SGL Command Set: Supported (Dword aligned) 00:13:00.774 SGL Keyed: Not Supported 00:13:00.774 SGL Bit Bucket Descriptor: Not Supported 00:13:00.774 SGL Metadata Pointer: Not Supported 00:13:00.774 Oversized SGL: Not Supported 00:13:00.774 SGL Metadata Address: Not Supported 00:13:00.774 SGL Offset: Not Supported 00:13:00.774 Transport SGL Data Block: Not Supported 00:13:00.774 Replay Protected Memory Block: Not Supported 00:13:00.774 00:13:00.774 Firmware Slot Information 00:13:00.774 ========================= 00:13:00.774 Active slot: 1 00:13:00.774 Slot 1 Firmware Revision: 25.01 00:13:00.774 00:13:00.774 00:13:00.774 Commands Supported and Effects 00:13:00.774 ============================== 00:13:00.774 Admin 
Commands 00:13:00.774 -------------- 00:13:00.774 Get Log Page (02h): Supported 00:13:00.774 Identify (06h): Supported 00:13:00.774 Abort (08h): Supported 00:13:00.774 Set Features (09h): Supported 00:13:00.774 Get Features (0Ah): Supported 00:13:00.774 Asynchronous Event Request (0Ch): Supported 00:13:00.774 Keep Alive (18h): Supported 00:13:00.774 I/O Commands 00:13:00.774 ------------ 00:13:00.774 Flush (00h): Supported LBA-Change 00:13:00.774 Write (01h): Supported LBA-Change 00:13:00.774 Read (02h): Supported 00:13:00.774 Compare (05h): Supported 00:13:00.774 Write Zeroes (08h): Supported LBA-Change 00:13:00.774 Dataset Management (09h): Supported LBA-Change 00:13:00.774 Copy (19h): Supported LBA-Change 00:13:00.774 00:13:00.774 Error Log 00:13:00.774 ========= 00:13:00.774 00:13:00.774 Arbitration 00:13:00.774 =========== 00:13:00.774 Arbitration Burst: 1 00:13:00.774 00:13:00.774 Power Management 00:13:00.774 ================ 00:13:00.774 Number of Power States: 1 00:13:00.774 Current Power State: Power State #0 00:13:00.774 Power State #0: 00:13:00.774 Max Power: 0.00 W 00:13:00.774 Non-Operational State: Operational 00:13:00.774 Entry Latency: Not Reported 00:13:00.774 Exit Latency: Not Reported 00:13:00.774 Relative Read Throughput: 0 00:13:00.774 Relative Read Latency: 0 00:13:00.774 Relative Write Throughput: 0 00:13:00.774 Relative Write Latency: 0 00:13:00.774 Idle Power: Not Reported 00:13:00.774 Active Power: Not Reported 00:13:00.774 Non-Operational Permissive Mode: Not Supported 00:13:00.774 00:13:00.774 Health Information 00:13:00.774 ================== 00:13:00.774 Critical Warnings: 00:13:00.774 Available Spare Space: OK 00:13:00.774 Temperature: OK 00:13:00.774 Device Reliability: OK 00:13:00.774 Read Only: No 00:13:00.774 Volatile Memory Backup: OK 00:13:00.774 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:00.774 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:00.774 Available Spare: 0% 00:13:00.775 Available Sp[2024-11-15 12:34:41.025987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:00.775 [2024-11-15 12:34:41.026019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:00.775 [2024-11-15 12:34:41.026062] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:00.775 [2024-11-15 12:34:41.026079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.775 [2024-11-15 12:34:41.026090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.775 [2024-11-15 12:34:41.026103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.775 [2024-11-15 12:34:41.026112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.775 [2024-11-15 12:34:41.026505] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:00.775 [2024-11-15 12:34:41.026524] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:00.775 [2024-11-15 12:34:41.027506] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:00.775 [2024-11-15 12:34:41.027599] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:00.775 [2024-11-15 12:34:41.027614] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:00.775 [2024-11-15 12:34:41.028519] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:00.775 [2024-11-15 12:34:41.028542] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:00.775 [2024-11-15 12:34:41.028595] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:00.775 [2024-11-15 12:34:41.031731] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:00.775 are Threshold: 0% 00:13:00.775 Life Percentage Used: 0% 00:13:00.775 Data Units Read: 0 00:13:00.775 Data Units Written: 0 00:13:00.775 Host Read Commands: 0 00:13:00.775 Host Write Commands: 0 00:13:00.775 Controller Busy Time: 0 minutes 00:13:00.775 Power Cycles: 0 00:13:00.775 Power On Hours: 0 hours 00:13:00.775 Unsafe Shutdowns: 0 00:13:00.775 Unrecoverable Media Errors: 0 00:13:00.775 Lifetime Error Log Entries: 0 00:13:00.775 Warning Temperature Time: 0 minutes 00:13:00.775 Critical Temperature Time: 0 minutes 00:13:00.775 00:13:00.775 Number of Queues 00:13:00.775 ================ 00:13:00.775 Number of I/O Submission Queues: 127 00:13:00.775 Number of I/O Completion Queues: 127 00:13:00.775 00:13:00.775 Active Namespaces 00:13:00.775 ================= 00:13:00.775 Namespace ID:1 00:13:00.775 Error Recovery Timeout: Unlimited 00:13:00.775 Command Set Identifier: NVM (00h) 00:13:00.775 Deallocate: Supported 00:13:00.775 Deallocated/Unwritten Error: Not Supported 00:13:00.775 Deallocated Read Value: Unknown 00:13:00.775 Deallocate in Write Zeroes: Not Supported 00:13:00.775 Deallocated Guard Field: 0xFFFF 00:13:00.775 Flush: Supported 00:13:00.775 Reservation: Supported 00:13:00.775 Namespace Sharing Capabilities: Multiple Controllers 00:13:00.775 Size (in LBAs): 131072 (0GiB) 00:13:00.775 Capacity (in LBAs): 131072 (0GiB) 00:13:00.775 Utilization (in LBAs): 131072 (0GiB) 00:13:00.775 NGUID: 6F78C57886164694840234139254679A 00:13:00.775 UUID: 6f78c578-8616-4694-8402-34139254679a 00:13:00.775 Thin Provisioning: Not Supported 00:13:00.775 Per-NS Atomic Units: Yes 00:13:00.775 Atomic Boundary Size (Normal): 0 00:13:00.775 Atomic Boundary Size (PFail): 0 00:13:00.775 Atomic Boundary Offset: 0 00:13:00.775 Maximum Single Source Range Length: 65535 00:13:00.775 Maximum Copy Length: 65535 00:13:00.775 Maximum Source Range Count: 1 00:13:00.775 NGUID/EUI64 Never Reused: No 00:13:00.775 Namespace Write Protected: No 00:13:00.775 Number of LBA Formats: 1 00:13:00.775 Current LBA Format: LBA Format #00 00:13:00.775 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:00.775 00:13:00.775 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
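The identify output ends above and the harness runs spdk_nvme_perf twice against the same controller, first with a read workload and then (next in the log) with a write workload; everything else about the two passes is identical. A condensed sketch of the pair (the perf and tr_id variables are introduced here for readability):

  # Two 5-second perf passes at queue depth 128 with 4 KiB I/O on core 1 (mask 0x2).
  perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  tr_id='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  for workload in read write; do
      $perf -r "$tr_id" -s 256 -g -q 128 -o 4096 -w "$workload" -t 5 -c 0x2
  done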
00:13:01.033 [2024-11-15 12:34:41.282087] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:06.299 Initializing NVMe Controllers 00:13:06.299 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:06.299 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:06.299 Initialization complete. Launching workers. 00:13:06.299 ======================================================== 00:13:06.299 Latency(us) 00:13:06.299 Device Information : IOPS MiB/s Average min max 00:13:06.299 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32951.20 128.72 3884.09 1204.71 8976.39 00:13:06.299 ======================================================== 00:13:06.299 Total : 32951.20 128.72 3884.09 1204.71 8976.39 00:13:06.299 00:13:06.299 [2024-11-15 12:34:46.304597] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:06.299 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:06.299 [2024-11-15 12:34:46.555793] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:11.565 Initializing NVMe Controllers 00:13:11.565 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:11.565 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:11.565 Initialization complete. Launching workers. 
00:13:11.565 ======================================================== 00:13:11.565 Latency(us) 00:13:11.565 Device Information : IOPS MiB/s Average min max 00:13:11.565 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15943.57 62.28 8033.53 4966.75 15979.49 00:13:11.565 ======================================================== 00:13:11.565 Total : 15943.57 62.28 8033.53 4966.75 15979.49 00:13:11.565 00:13:11.565 [2024-11-15 12:34:51.596213] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:11.565 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:11.565 [2024-11-15 12:34:51.817280] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:16.833 [2024-11-15 12:34:56.891070] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:16.833 Initializing NVMe Controllers 00:13:16.833 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:16.833 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:16.833 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:16.833 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:16.833 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:16.833 Initialization complete. Launching workers. 00:13:16.833 Starting thread on core 2 00:13:16.833 Starting thread on core 3 00:13:16.833 Starting thread on core 1 00:13:16.833 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:17.092 [2024-11-15 12:34:57.217205] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:20.374 [2024-11-15 12:35:00.289473] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:20.375 Initializing NVMe Controllers 00:13:20.375 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:20.375 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:20.375 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:20.375 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:20.375 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:20.375 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:20.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:20.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:20.375 Initialization complete. Launching workers. 
00:13:20.375 Starting thread on core 1 with urgent priority queue 00:13:20.375 Starting thread on core 2 with urgent priority queue 00:13:20.375 Starting thread on core 3 with urgent priority queue 00:13:20.375 Starting thread on core 0 with urgent priority queue 00:13:20.375 SPDK bdev Controller (SPDK1 ) core 0: 4408.00 IO/s 22.69 secs/100000 ios 00:13:20.375 SPDK bdev Controller (SPDK1 ) core 1: 5415.00 IO/s 18.47 secs/100000 ios 00:13:20.375 SPDK bdev Controller (SPDK1 ) core 2: 5620.67 IO/s 17.79 secs/100000 ios 00:13:20.375 SPDK bdev Controller (SPDK1 ) core 3: 5592.00 IO/s 17.88 secs/100000 ios 00:13:20.375 ======================================================== 00:13:20.375 00:13:20.375 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:20.375 [2024-11-15 12:35:00.617228] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:20.375 Initializing NVMe Controllers 00:13:20.375 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:20.375 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:20.375 Namespace ID: 1 size: 0GB 00:13:20.375 Initialization complete. 00:13:20.375 INFO: using host memory buffer for IO 00:13:20.375 Hello world! 00:13:20.375 [2024-11-15 12:35:00.651884] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:20.375 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:20.632 [2024-11-15 12:35:00.966580] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:22.051 Initializing NVMe Controllers 00:13:22.051 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.051 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.051 Initialization complete. Launching workers. 
00:13:22.051 submit (in ns) avg, min, max = 7791.2, 3552.2, 4023832.2 00:13:22.051 complete (in ns) avg, min, max = 28692.9, 2064.4, 4035191.1 00:13:22.051 00:13:22.051 Submit histogram 00:13:22.051 ================ 00:13:22.051 Range in us Cumulative Count 00:13:22.051 3.532 - 3.556: 0.0236% ( 3) 00:13:22.051 3.556 - 3.579: 0.8178% ( 101) 00:13:22.051 3.579 - 3.603: 2.0996% ( 163) 00:13:22.051 3.603 - 3.627: 5.6303% ( 449) 00:13:22.051 3.627 - 3.650: 11.2684% ( 717) 00:13:22.051 3.650 - 3.674: 19.2734% ( 1018) 00:13:22.051 3.674 - 3.698: 27.7503% ( 1078) 00:13:22.051 3.698 - 3.721: 36.3215% ( 1090) 00:13:22.051 3.721 - 3.745: 43.5716% ( 922) 00:13:22.051 3.745 - 3.769: 49.9646% ( 813) 00:13:22.051 3.769 - 3.793: 54.9894% ( 639) 00:13:22.051 3.793 - 3.816: 59.5817% ( 584) 00:13:22.051 3.816 - 3.840: 63.3719% ( 482) 00:13:22.051 3.840 - 3.864: 67.5081% ( 526) 00:13:22.051 3.864 - 3.887: 70.8815% ( 429) 00:13:22.051 3.887 - 3.911: 74.3257% ( 438) 00:13:22.051 3.911 - 3.935: 78.3597% ( 513) 00:13:22.051 3.935 - 3.959: 81.0962% ( 348) 00:13:22.052 3.959 - 3.982: 83.3923% ( 292) 00:13:22.052 3.982 - 4.006: 85.7120% ( 295) 00:13:22.052 4.006 - 4.030: 87.5285% ( 231) 00:13:22.052 4.030 - 4.053: 89.1012% ( 200) 00:13:22.052 4.053 - 4.077: 90.4459% ( 171) 00:13:22.052 4.077 - 4.101: 91.6568% ( 154) 00:13:22.052 4.101 - 4.124: 92.5297% ( 111) 00:13:22.052 4.124 - 4.148: 93.1981% ( 85) 00:13:22.052 4.148 - 4.172: 93.6227% ( 54) 00:13:22.052 4.172 - 4.196: 94.0159% ( 50) 00:13:22.052 4.196 - 4.219: 94.3540% ( 43) 00:13:22.052 4.219 - 4.243: 94.5663% ( 27) 00:13:22.052 4.243 - 4.267: 94.6607% ( 12) 00:13:22.052 4.267 - 4.290: 94.7944% ( 17) 00:13:22.052 4.290 - 4.314: 94.9516% ( 20) 00:13:22.052 4.314 - 4.338: 95.0303% ( 10) 00:13:22.052 4.338 - 4.361: 95.1404% ( 14) 00:13:22.052 4.361 - 4.385: 95.2505% ( 14) 00:13:22.052 4.385 - 4.409: 95.3370% ( 11) 00:13:22.052 4.409 - 4.433: 95.3605% ( 3) 00:13:22.052 4.433 - 4.456: 95.3684% ( 1) 00:13:22.052 4.456 - 4.480: 95.4077% ( 5) 00:13:22.052 4.480 - 4.504: 95.4234% ( 2) 00:13:22.052 4.527 - 4.551: 95.4549% ( 4) 00:13:22.052 4.551 - 4.575: 95.4785% ( 3) 00:13:22.052 4.575 - 4.599: 95.4864% ( 1) 00:13:22.052 4.599 - 4.622: 95.5257% ( 5) 00:13:22.052 4.622 - 4.646: 95.5571% ( 4) 00:13:22.052 4.646 - 4.670: 95.6279% ( 9) 00:13:22.052 4.670 - 4.693: 95.6829% ( 7) 00:13:22.052 4.693 - 4.717: 95.7459% ( 8) 00:13:22.052 4.717 - 4.741: 95.8166% ( 9) 00:13:22.052 4.741 - 4.764: 95.8717% ( 7) 00:13:22.052 4.764 - 4.788: 95.9267% ( 7) 00:13:22.052 4.788 - 4.812: 96.0211% ( 12) 00:13:22.052 4.812 - 4.836: 96.0604% ( 5) 00:13:22.052 4.836 - 4.859: 96.1705% ( 14) 00:13:22.052 4.859 - 4.883: 96.2570% ( 11) 00:13:22.052 4.883 - 4.907: 96.3356% ( 10) 00:13:22.052 4.907 - 4.930: 96.4221% ( 11) 00:13:22.052 4.930 - 4.954: 96.4693% ( 6) 00:13:22.052 4.954 - 4.978: 96.5479% ( 10) 00:13:22.052 4.978 - 5.001: 96.6030% ( 7) 00:13:22.052 5.001 - 5.025: 96.6737% ( 9) 00:13:22.052 5.025 - 5.049: 96.7288% ( 7) 00:13:22.052 5.049 - 5.073: 96.7838% ( 7) 00:13:22.052 5.073 - 5.096: 96.8310% ( 6) 00:13:22.052 5.096 - 5.120: 96.9332% ( 13) 00:13:22.052 5.120 - 5.144: 97.0119% ( 10) 00:13:22.052 5.144 - 5.167: 97.1141% ( 13) 00:13:22.052 5.167 - 5.191: 97.1770% ( 8) 00:13:22.052 5.191 - 5.215: 97.2321% ( 7) 00:13:22.052 5.215 - 5.239: 97.3657% ( 17) 00:13:22.052 5.239 - 5.262: 97.4522% ( 11) 00:13:22.052 5.262 - 5.286: 97.4994% ( 6) 00:13:22.052 5.286 - 5.310: 97.6095% ( 14) 00:13:22.052 5.310 - 5.333: 97.6724% ( 8) 00:13:22.052 5.333 - 5.357: 97.7196% ( 6) 00:13:22.052 5.357 - 5.381: 
97.7510% ( 4) 00:13:22.052 5.381 - 5.404: 97.8139% ( 8) 00:13:22.052 5.404 - 5.428: 97.8375% ( 3) 00:13:22.052 5.428 - 5.452: 97.9004% ( 8) 00:13:22.052 5.452 - 5.476: 97.9555% ( 7) 00:13:22.052 5.476 - 5.499: 97.9791% ( 3) 00:13:22.052 5.499 - 5.523: 98.0105% ( 4) 00:13:22.052 5.523 - 5.547: 98.0263% ( 2) 00:13:22.052 5.547 - 5.570: 98.0420% ( 2) 00:13:22.052 5.570 - 5.594: 98.0577% ( 2) 00:13:22.052 5.594 - 5.618: 98.0656% ( 1) 00:13:22.052 5.618 - 5.641: 98.1049% ( 5) 00:13:22.052 5.665 - 5.689: 98.1364% ( 4) 00:13:22.052 5.736 - 5.760: 98.1442% ( 1) 00:13:22.052 5.760 - 5.784: 98.1599% ( 2) 00:13:22.052 5.784 - 5.807: 98.1678% ( 1) 00:13:22.052 5.879 - 5.902: 98.1757% ( 1) 00:13:22.052 5.997 - 6.021: 98.1835% ( 1) 00:13:22.052 6.021 - 6.044: 98.1914% ( 1) 00:13:22.052 6.068 - 6.116: 98.2071% ( 2) 00:13:22.052 6.116 - 6.163: 98.2150% ( 1) 00:13:22.052 6.210 - 6.258: 98.2229% ( 1) 00:13:22.052 6.258 - 6.305: 98.2386% ( 2) 00:13:22.052 6.305 - 6.353: 98.2543% ( 2) 00:13:22.052 6.353 - 6.400: 98.2700% ( 2) 00:13:22.052 6.447 - 6.495: 98.2779% ( 1) 00:13:22.052 6.590 - 6.637: 98.2858% ( 1) 00:13:22.052 6.684 - 6.732: 98.2936% ( 1) 00:13:22.052 6.779 - 6.827: 98.3015% ( 1) 00:13:22.052 6.874 - 6.921: 98.3093% ( 1) 00:13:22.052 6.969 - 7.016: 98.3172% ( 1) 00:13:22.052 7.064 - 7.111: 98.3251% ( 1) 00:13:22.052 7.111 - 7.159: 98.3329% ( 1) 00:13:22.052 7.159 - 7.206: 98.3487% ( 2) 00:13:22.052 7.301 - 7.348: 98.3565% ( 1) 00:13:22.052 7.348 - 7.396: 98.3644% ( 1) 00:13:22.052 7.633 - 7.680: 98.3723% ( 1) 00:13:22.052 7.870 - 7.917: 98.3801% ( 1) 00:13:22.052 7.917 - 7.964: 98.3880% ( 1) 00:13:22.052 8.012 - 8.059: 98.3958% ( 1) 00:13:22.052 8.249 - 8.296: 98.4037% ( 1) 00:13:22.052 8.391 - 8.439: 98.4194% ( 2) 00:13:22.052 8.439 - 8.486: 98.4273% ( 1) 00:13:22.052 8.533 - 8.581: 98.4430% ( 2) 00:13:22.052 8.581 - 8.628: 98.4509% ( 1) 00:13:22.052 8.676 - 8.723: 98.4588% ( 1) 00:13:22.052 8.770 - 8.818: 98.4666% ( 1) 00:13:22.052 8.818 - 8.865: 98.4745% ( 1) 00:13:22.052 8.865 - 8.913: 98.4823% ( 1) 00:13:22.052 8.913 - 8.960: 98.4902% ( 1) 00:13:22.052 8.960 - 9.007: 98.4981% ( 1) 00:13:22.052 9.055 - 9.102: 98.5295% ( 4) 00:13:22.052 9.150 - 9.197: 98.5531% ( 3) 00:13:22.052 9.292 - 9.339: 98.5767% ( 3) 00:13:22.052 9.434 - 9.481: 98.5846% ( 1) 00:13:22.052 9.529 - 9.576: 98.5924% ( 1) 00:13:22.052 9.624 - 9.671: 98.6003% ( 1) 00:13:22.052 9.671 - 9.719: 98.6082% ( 1) 00:13:22.052 9.766 - 9.813: 98.6239% ( 2) 00:13:22.052 9.813 - 9.861: 98.6396% ( 2) 00:13:22.052 9.861 - 9.908: 98.6475% ( 1) 00:13:22.052 9.908 - 9.956: 98.6632% ( 2) 00:13:22.052 10.003 - 10.050: 98.6789% ( 2) 00:13:22.052 10.050 - 10.098: 98.6868% ( 1) 00:13:22.052 10.098 - 10.145: 98.6947% ( 1) 00:13:22.052 10.145 - 10.193: 98.7104% ( 2) 00:13:22.052 10.287 - 10.335: 98.7183% ( 1) 00:13:22.052 10.382 - 10.430: 98.7261% ( 1) 00:13:22.052 10.430 - 10.477: 98.7340% ( 1) 00:13:22.052 10.477 - 10.524: 98.7497% ( 2) 00:13:22.052 10.572 - 10.619: 98.7733% ( 3) 00:13:22.052 10.619 - 10.667: 98.7812% ( 1) 00:13:22.052 10.714 - 10.761: 98.7890% ( 1) 00:13:22.052 10.809 - 10.856: 98.7969% ( 1) 00:13:22.052 10.904 - 10.951: 98.8047% ( 1) 00:13:22.052 11.141 - 11.188: 98.8126% ( 1) 00:13:22.052 11.236 - 11.283: 98.8205% ( 1) 00:13:22.052 11.283 - 11.330: 98.8283% ( 1) 00:13:22.052 11.378 - 11.425: 98.8441% ( 2) 00:13:22.052 11.662 - 11.710: 98.8519% ( 1) 00:13:22.052 11.852 - 11.899: 98.8598% ( 1) 00:13:22.052 12.231 - 12.326: 98.8677% ( 1) 00:13:22.052 12.326 - 12.421: 98.8834% ( 2) 00:13:22.052 12.516 - 12.610: 98.8991% ( 2) 
00:13:22.052 12.705 - 12.800: 98.9070% ( 1) 00:13:22.052 12.895 - 12.990: 98.9148% ( 1) 00:13:22.052 13.084 - 13.179: 98.9227% ( 1) 00:13:22.052 13.369 - 13.464: 98.9306% ( 1) 00:13:22.052 13.464 - 13.559: 98.9542% ( 3) 00:13:22.052 13.559 - 13.653: 98.9699% ( 2) 00:13:22.052 13.653 - 13.748: 98.9935% ( 3) 00:13:22.052 13.748 - 13.843: 99.0092% ( 2) 00:13:22.052 13.938 - 14.033: 99.0171% ( 1) 00:13:22.052 14.033 - 14.127: 99.0249% ( 1) 00:13:22.052 14.412 - 14.507: 99.0328% ( 1) 00:13:22.052 14.507 - 14.601: 99.0485% ( 2) 00:13:22.052 14.601 - 14.696: 99.0564% ( 1) 00:13:22.052 14.981 - 15.076: 99.0721% ( 2) 00:13:22.053 15.170 - 15.265: 99.0878% ( 2) 00:13:22.053 15.550 - 15.644: 99.0957% ( 1) 00:13:22.053 16.213 - 16.308: 99.1036% ( 1) 00:13:22.053 16.308 - 16.403: 99.1114% ( 1) 00:13:22.053 16.403 - 16.498: 99.1193% ( 1) 00:13:22.053 16.877 - 16.972: 99.1350% ( 2) 00:13:22.053 17.161 - 17.256: 99.1507% ( 2) 00:13:22.053 17.256 - 17.351: 99.1665% ( 2) 00:13:22.053 17.351 - 17.446: 99.1743% ( 1) 00:13:22.053 17.446 - 17.541: 99.2058% ( 4) 00:13:22.053 17.541 - 17.636: 99.2294% ( 3) 00:13:22.053 17.636 - 17.730: 99.2766% ( 6) 00:13:22.053 17.730 - 17.825: 99.3552% ( 10) 00:13:22.053 17.825 - 17.920: 99.3866% ( 4) 00:13:22.053 17.920 - 18.015: 99.4260% ( 5) 00:13:22.053 18.015 - 18.110: 99.4810% ( 7) 00:13:22.053 18.110 - 18.204: 99.5596% ( 10) 00:13:22.053 18.204 - 18.299: 99.5911% ( 4) 00:13:22.053 18.299 - 18.394: 99.6540% ( 8) 00:13:22.053 18.394 - 18.489: 99.6855% ( 4) 00:13:22.053 18.489 - 18.584: 99.7091% ( 3) 00:13:22.053 18.584 - 18.679: 99.7169% ( 1) 00:13:22.053 18.679 - 18.773: 99.7405% ( 3) 00:13:22.053 18.773 - 18.868: 99.7641% ( 3) 00:13:22.053 18.868 - 18.963: 99.7720% ( 1) 00:13:22.053 19.058 - 19.153: 99.7955% ( 3) 00:13:22.053 19.153 - 19.247: 99.8034% ( 1) 00:13:22.053 19.437 - 19.532: 99.8113% ( 1) 00:13:22.053 19.911 - 20.006: 99.8191% ( 1) 00:13:22.053 20.196 - 20.290: 99.8349% ( 2) 00:13:22.053 20.670 - 20.764: 99.8427% ( 1) 00:13:22.053 21.713 - 21.807: 99.8506% ( 1) 00:13:22.053 22.471 - 22.566: 99.8585% ( 1) 00:13:22.053 24.273 - 24.462: 99.8663% ( 1) 00:13:22.053 24.652 - 24.841: 99.8742% ( 1) 00:13:22.053 25.600 - 25.790: 99.8820% ( 1) 00:13:22.053 25.979 - 26.169: 99.8899% ( 1) 00:13:22.053 27.117 - 27.307: 99.8978% ( 1) 00:13:22.053 37.357 - 37.547: 99.9056% ( 1) 00:13:22.053 3980.705 - 4004.978: 99.9607% ( 7) 00:13:22.053 4004.978 - 4029.250: 100.0000% ( 5) 00:13:22.053 00:13:22.053 Complete histogram 00:13:22.053 ================== 00:13:22.053 Range in us Cumulative Count 00:13:22.053 2.062 - 2.074: 5.1978% ( 661) 00:13:22.053 2.074 - 2.086: 38.3031% ( 4210) 00:13:22.053 2.086 - 2.098: 43.2885% ( 634) 00:13:22.053 2.098 - 2.110: 48.7772% ( 698) 00:13:22.053 2.110 - 2.121: 55.9880% ( 917) 00:13:22.053 2.121 - 2.133: 57.5372% ( 197) 00:13:22.053 2.133 - 2.145: 65.0232% ( 952) 00:13:22.053 2.145 - 2.157: 76.1107% ( 1410) 00:13:22.053 2.157 - 2.169: 77.4239% ( 167) 00:13:22.053 2.169 - 2.181: 80.0425% ( 333) 00:13:22.053 2.181 - 2.193: 82.3150% ( 289) 00:13:22.053 2.193 - 2.204: 83.1407% ( 105) 00:13:22.053 2.204 - 2.216: 84.9650% ( 232) 00:13:22.053 2.216 - 2.228: 87.8509% ( 367) 00:13:22.053 2.228 - 2.240: 89.7224% ( 238) 00:13:22.053 2.240 - 2.252: 91.2008% ( 188) 00:13:22.053 2.252 - 2.264: 91.8220% ( 79) 00:13:22.053 2.264 - 2.276: 92.0107% ( 24) 00:13:22.053 2.276 - 2.287: 92.2073% ( 25) 00:13:22.053 2.287 - 2.299: 92.4825% ( 35) 00:13:22.053 2.299 - 2.311: 92.9543% ( 60) 00:13:22.053 2.311 - 2.323: 93.2531% ( 38) 00:13:22.053 2.323 - 2.335: 93.3160% ( 8) 
00:13:22.053 2.335 - 2.347: 93.3318% ( 2) 00:13:22.053 2.347 - 2.359: 93.4340% ( 13) 00:13:22.053 2.359 - 2.370: 93.5362% ( 13) 00:13:22.053 2.370 - 2.382: 93.7092% ( 22) 00:13:22.053 2.382 - 2.394: 93.9923% ( 36) 00:13:22.053 2.394 - 2.406: 94.2125% ( 28) 00:13:22.053 2.406 - 2.418: 94.4405% ( 29) 00:13:22.053 2.418 - 2.430: 94.7393% ( 38) 00:13:22.053 2.430 - 2.441: 95.0617% ( 41) 00:13:22.053 2.441 - 2.453: 95.3291% ( 34) 00:13:22.053 2.453 - 2.465: 95.5807% ( 32) 00:13:22.053 2.465 - 2.477: 95.7852% ( 26) 00:13:22.053 2.477 - 2.489: 95.8953% ( 14) 00:13:22.053 2.489 - 2.501: 96.0211% ( 16) 00:13:22.053 2.501 - 2.513: 96.0997% ( 10) 00:13:22.053 2.513 - 2.524: 96.1705% ( 9) 00:13:22.053 2.524 - 2.536: 96.2255% ( 7) 00:13:22.053 2.536 - 2.548: 96.2648% ( 5) 00:13:22.053 2.548 - 2.560: 96.2806% ( 2) 00:13:22.053 2.560 - 2.572: 96.3120% ( 4) 00:13:22.053 2.572 - 2.584: 96.3278% ( 2) 00:13:22.053 2.619 - 2.631: 96.3592% ( 4) 00:13:22.053 2.631 - 2.643: 96.3828% ( 3) 00:13:22.053 2.643 - 2.655: 96.3985% ( 2) 00:13:22.053 2.655 - 2.667: 96.4142% ( 2) 00:13:22.053 2.667 - 2.679: 96.4221% ( 1) 00:13:22.053 2.679 - 2.690: 96.4378% ( 2) 00:13:22.053 2.690 - 2.702: 96.4457% ( 1) 00:13:22.053 2.702 - 2.714: 96.4536% ( 1) 00:13:22.053 2.714 - 2.726: 96.4929% ( 5) 00:13:22.053 2.738 - 2.750: 96.5086% ( 2) 00:13:22.053 2.750 - 2.761: 96.5243% ( 2) 00:13:22.053 2.761 - 2.773: 96.5479% ( 3) 00:13:22.053 2.785 - 2.797: 96.5558% ( 1) 00:13:22.053 2.797 - 2.809: 96.5637% ( 1) 00:13:22.053 2.809 - 2.821: 96.5872% ( 3) 00:13:22.053 2.821 - 2.833: 96.6030% ( 2) 00:13:22.053 2.833 - 2.844: 96.6108% ( 1) 00:13:22.053 2.844 - 2.856: 96.6187% ( 1) 00:13:22.053 2.856 - 2.868: 96.6266% ( 1) 00:13:22.053 2.868 - 2.880: 96.6423% ( 2) 00:13:22.053 2.880 - 2.892: 96.6659% ( 3) 00:13:22.053 2.892 - 2.904: 96.6816% ( 2) 00:13:22.053 2.904 - 2.916: 96.6895% ( 1) 00:13:22.053 2.916 - 2.927: 96.7288% ( 5) 00:13:22.053 2.927 - 2.939: 96.7602% ( 4) 00:13:22.053 2.939 - 2.951: 96.7996% ( 5) 00:13:22.053 2.951 - 2.963: 96.8153% ( 2) 00:13:22.053 2.963 - 2.975: 96.8546% ( 5) 00:13:22.053 2.975 - 2.987: 96.8939% ( 5) 00:13:22.053 2.987 - 2.999: 96.9096% ( 2) 00:13:22.053 2.999 - 3.010: 96.9568% ( 6) 00:13:22.053 3.010 - 3.022: 96.9883% ( 4) 00:13:22.053 3.022 - 3.034: 96.9961% ( 1) 00:13:22.053 3.034 - 3.058: 97.0748% ( 10) 00:13:22.053 3.058 - 3.081: 97.1770% ( 13) 00:13:22.053 3.081 - 3.105: 97.3028% ( 16) 00:13:22.053 3.105 - 3.129: 97.4208% ( 15) 00:13:22.053 3.129 - 3.153: 97.5230% ( 13) 00:13:22.053 3.153 - 3.176: 97.6174% ( 12) 00:13:22.053 3.176 - 3.200: 97.7275% ( 14) 00:13:22.053 3.200 - 3.224: 97.8218% ( 12) 00:13:22.053 3.224 - 3.247: 97.8926% ( 9) 00:13:22.053 3.247 - 3.271: 97.9398% ( 6) 00:13:22.053 3.271 - 3.295: 97.9634% ( 3) 00:13:22.053 3.295 - 3.319: 97.9791% ( 2) 00:13:22.053 3.319 - 3.342: 98.0105% ( 4) 00:13:22.053 3.342 - 3.366: 98.0734% ( 8) 00:13:22.053 3.366 - 3.390: 98.1285% ( 7) 00:13:22.053 3.390 - 3.413: 98.1678% ( 5) 00:13:22.053 3.413 - 3.437: 98.1757% ( 1) 00:13:22.053 3.437 - 3.461: 98.1835% ( 1) 00:13:22.053 3.461 - 3.484: 98.2071% ( 3) 00:13:22.053 3.484 - 3.508: 98.2386% ( 4) 00:13:22.053 3.508 - 3.532: 98.2464% ( 1) 00:13:22.053 3.532 - 3.556: 98.2700% ( 3) 00:13:22.053 3.556 - 3.579: 98.2858% ( 2) 00:13:22.053 3.603 - 3.627: 98.2936% ( 1) 00:13:22.053 3.650 - 3.674: 98.3015% ( 1) 00:13:22.053 3.674 - 3.698: 98.3093% ( 1) 00:13:22.053 3.793 - 3.816: 98.3172% ( 1) 00:13:22.053 3.840 - 3.864: 98.3408% ( 3) 00:13:22.053 3.864 - 3.887: 98.3565% ( 2) 00:13:22.053 3.911 - 3.935: 98.3644% ( 1) 
00:13:22.053 3.959 - 3.982: 98.3801% ( 2) 00:13:22.053 3.982 - 4.006: 98.3880% ( 1) 00:13:22.053 4.030 - 4.053: 98.3958% ( 1) 00:13:22.053 4.053 - 4.077: 98.4037% ( 1) 00:13:22.053 4.219 - 4.243: 98.4116% ( 1) 00:13:22.053 4.314 - 4.338: 98.4194% ( 1) 00:13:22.053 4.883 - 4.907: 98.4273% ( 1) 00:13:22.053 5.025 - 5.049: 98.4352% ( 1) 00:13:22.053 5.997 - 6.021: 98.4430% ( 1) 00:13:22.053 6.044 - 6.068: 98.4509% ( 1) 00:13:22.053 6.068 - 6.116: 98.4588% ( 1) 00:13:22.053 6.210 - 6.258: 98.4666% ( 1) 00:13:22.053 6.590 - 6.637: 98.4823% ( 2) 00:13:22.053 6.684 - 6.732: 98.4902% ( 1) 00:13:22.054 6.827 - 6.874: 98.4981% ( 1) 00:13:22.054 6.874 - 6.921: 98.5295% ( 4) 00:13:22.054 6.921 - 6.969: 98.5531%[2024-11-15 12:35:01.988603] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:22.054 ( 3) 00:13:22.054 7.253 - 7.301: 98.5610% ( 1) 00:13:22.054 7.301 - 7.348: 98.5688% ( 1) 00:13:22.054 7.396 - 7.443: 98.5767% ( 1) 00:13:22.054 7.443 - 7.490: 98.6160% ( 5) 00:13:22.054 7.490 - 7.538: 98.6396% ( 3) 00:13:22.054 7.538 - 7.585: 98.6475% ( 1) 00:13:22.054 7.680 - 7.727: 98.6553% ( 1) 00:13:22.054 7.727 - 7.775: 98.6632% ( 1) 00:13:22.054 7.775 - 7.822: 98.6711% ( 1) 00:13:22.054 7.917 - 7.964: 98.6789% ( 1) 00:13:22.054 8.201 - 8.249: 98.6868% ( 1) 00:13:22.054 8.391 - 8.439: 98.6947% ( 1) 00:13:22.054 8.533 - 8.581: 98.7025% ( 1) 00:13:22.054 8.865 - 8.913: 98.7104% ( 1) 00:13:22.054 8.913 - 8.960: 98.7183% ( 1) 00:13:22.054 9.055 - 9.102: 98.7261% ( 1) 00:13:22.054 9.150 - 9.197: 98.7340% ( 1) 00:13:22.054 9.624 - 9.671: 98.7418% ( 1) 00:13:22.054 9.671 - 9.719: 98.7497% ( 1) 00:13:22.054 10.050 - 10.098: 98.7576% ( 1) 00:13:22.054 10.240 - 10.287: 98.7654% ( 1) 00:13:22.054 10.619 - 10.667: 98.7733% ( 1) 00:13:22.054 11.473 - 11.520: 98.7812% ( 1) 00:13:22.054 11.852 - 11.899: 98.7890% ( 1) 00:13:22.054 12.326 - 12.421: 98.7969% ( 1) 00:13:22.054 12.516 - 12.610: 98.8047% ( 1) 00:13:22.054 14.886 - 14.981: 98.8126% ( 1) 00:13:22.054 15.455 - 15.550: 98.8205% ( 1) 00:13:22.054 15.550 - 15.644: 98.8283% ( 1) 00:13:22.054 15.739 - 15.834: 98.8519% ( 3) 00:13:22.054 15.834 - 15.929: 98.8755% ( 3) 00:13:22.054 15.929 - 16.024: 98.8991% ( 3) 00:13:22.054 16.024 - 16.119: 98.9227% ( 3) 00:13:22.054 16.119 - 16.213: 98.9384% ( 2) 00:13:22.054 16.213 - 16.308: 98.9935% ( 7) 00:13:22.054 16.308 - 16.403: 99.0328% ( 5) 00:13:22.054 16.403 - 16.498: 99.0800% ( 6) 00:13:22.054 16.498 - 16.593: 99.0957% ( 2) 00:13:22.054 16.593 - 16.687: 99.1272% ( 4) 00:13:22.054 16.687 - 16.782: 99.1901% ( 8) 00:13:22.054 16.782 - 16.877: 99.2372% ( 6) 00:13:22.054 16.877 - 16.972: 99.2451% ( 1) 00:13:22.054 16.972 - 17.067: 99.2530% ( 1) 00:13:22.054 17.161 - 17.256: 99.2608% ( 1) 00:13:22.054 17.256 - 17.351: 99.2766% ( 2) 00:13:22.054 17.351 - 17.446: 99.2844% ( 1) 00:13:22.054 17.446 - 17.541: 99.3159% ( 4) 00:13:22.054 18.204 - 18.299: 99.3237% ( 1) 00:13:22.054 18.299 - 18.394: 99.3316% ( 1) 00:13:22.054 29.582 - 29.772: 99.3395% ( 1) 00:13:22.054 3980.705 - 4004.978: 99.8191% ( 61) 00:13:22.054 4004.978 - 4029.250: 99.9843% ( 21) 00:13:22.054 4029.250 - 4053.523: 100.0000% ( 2) 00:13:22.054 00:13:22.054 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:22.054 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:22.054 12:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:22.054 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:22.054 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:22.054 [ 00:13:22.054 { 00:13:22.054 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:22.054 "subtype": "Discovery", 00:13:22.054 "listen_addresses": [], 00:13:22.054 "allow_any_host": true, 00:13:22.054 "hosts": [] 00:13:22.054 }, 00:13:22.054 { 00:13:22.054 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:22.054 "subtype": "NVMe", 00:13:22.054 "listen_addresses": [ 00:13:22.054 { 00:13:22.054 "trtype": "VFIOUSER", 00:13:22.054 "adrfam": "IPv4", 00:13:22.054 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:22.054 "trsvcid": "0" 00:13:22.054 } 00:13:22.054 ], 00:13:22.054 "allow_any_host": true, 00:13:22.054 "hosts": [], 00:13:22.054 "serial_number": "SPDK1", 00:13:22.054 "model_number": "SPDK bdev Controller", 00:13:22.054 "max_namespaces": 32, 00:13:22.054 "min_cntlid": 1, 00:13:22.054 "max_cntlid": 65519, 00:13:22.054 "namespaces": [ 00:13:22.054 { 00:13:22.054 "nsid": 1, 00:13:22.054 "bdev_name": "Malloc1", 00:13:22.054 "name": "Malloc1", 00:13:22.054 "nguid": "6F78C57886164694840234139254679A", 00:13:22.054 "uuid": "6f78c578-8616-4694-8402-34139254679a" 00:13:22.054 } 00:13:22.054 ] 00:13:22.054 }, 00:13:22.054 { 00:13:22.054 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:22.054 "subtype": "NVMe", 00:13:22.054 "listen_addresses": [ 00:13:22.054 { 00:13:22.054 "trtype": "VFIOUSER", 00:13:22.054 "adrfam": "IPv4", 00:13:22.054 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:22.054 "trsvcid": "0" 00:13:22.054 } 00:13:22.054 ], 00:13:22.054 "allow_any_host": true, 00:13:22.054 "hosts": [], 00:13:22.054 "serial_number": "SPDK2", 00:13:22.054 "model_number": "SPDK bdev Controller", 00:13:22.054 "max_namespaces": 32, 00:13:22.054 "min_cntlid": 1, 00:13:22.054 "max_cntlid": 65519, 00:13:22.054 "namespaces": [ 00:13:22.054 { 00:13:22.054 "nsid": 1, 00:13:22.054 "bdev_name": "Malloc2", 00:13:22.054 "name": "Malloc2", 00:13:22.054 "nguid": "1FF5518EF7BE4601840BBEE2A3C08874", 00:13:22.054 "uuid": "1ff5518e-f7be-4601-840b-bee2a3c08874" 00:13:22.054 } 00:13:22.054 ] 00:13:22.054 } 00:13:22.054 ] 00:13:22.054 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:22.054 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=998631 00:13:22.054 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:22.054 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:22.054 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:22.054 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:22.054 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:22.054 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:22.054 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:22.054 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:22.334 [2024-11-15 12:35:02.495024] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:22.334 Malloc3 00:13:22.334 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:22.614 [2024-11-15 12:35:02.896980] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:22.614 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:22.614 Asynchronous Event Request test 00:13:22.614 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.614 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.614 Registering asynchronous event callbacks... 00:13:22.614 Starting namespace attribute notice tests for all controllers... 00:13:22.614 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:22.614 aer_cb - Changed Namespace 00:13:22.614 Cleaning up... 00:13:22.872 [ 00:13:22.872 { 00:13:22.872 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:22.872 "subtype": "Discovery", 00:13:22.872 "listen_addresses": [], 00:13:22.872 "allow_any_host": true, 00:13:22.872 "hosts": [] 00:13:22.872 }, 00:13:22.872 { 00:13:22.872 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:22.872 "subtype": "NVMe", 00:13:22.872 "listen_addresses": [ 00:13:22.872 { 00:13:22.872 "trtype": "VFIOUSER", 00:13:22.872 "adrfam": "IPv4", 00:13:22.872 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:22.872 "trsvcid": "0" 00:13:22.872 } 00:13:22.872 ], 00:13:22.872 "allow_any_host": true, 00:13:22.872 "hosts": [], 00:13:22.872 "serial_number": "SPDK1", 00:13:22.872 "model_number": "SPDK bdev Controller", 00:13:22.872 "max_namespaces": 32, 00:13:22.872 "min_cntlid": 1, 00:13:22.872 "max_cntlid": 65519, 00:13:22.872 "namespaces": [ 00:13:22.872 { 00:13:22.872 "nsid": 1, 00:13:22.872 "bdev_name": "Malloc1", 00:13:22.872 "name": "Malloc1", 00:13:22.872 "nguid": "6F78C57886164694840234139254679A", 00:13:22.872 "uuid": "6f78c578-8616-4694-8402-34139254679a" 00:13:22.872 }, 00:13:22.872 { 00:13:22.872 "nsid": 2, 00:13:22.872 "bdev_name": "Malloc3", 00:13:22.872 "name": "Malloc3", 00:13:22.872 "nguid": "17A8558AA1A148B0995105393D1753D2", 00:13:22.872 "uuid": "17a8558a-a1a1-48b0-9951-05393d1753d2" 00:13:22.872 } 00:13:22.872 ] 00:13:22.873 }, 00:13:22.873 { 00:13:22.873 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:22.873 "subtype": "NVMe", 00:13:22.873 "listen_addresses": [ 00:13:22.873 { 00:13:22.873 "trtype": "VFIOUSER", 00:13:22.873 "adrfam": "IPv4", 00:13:22.873 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:22.873 "trsvcid": "0" 00:13:22.873 } 00:13:22.873 ], 00:13:22.873 "allow_any_host": true, 00:13:22.873 "hosts": [], 00:13:22.873 "serial_number": "SPDK2", 00:13:22.873 "model_number": "SPDK bdev 
Controller", 00:13:22.873 "max_namespaces": 32, 00:13:22.873 "min_cntlid": 1, 00:13:22.873 "max_cntlid": 65519, 00:13:22.873 "namespaces": [ 00:13:22.873 { 00:13:22.873 "nsid": 1, 00:13:22.873 "bdev_name": "Malloc2", 00:13:22.873 "name": "Malloc2", 00:13:22.873 "nguid": "1FF5518EF7BE4601840BBEE2A3C08874", 00:13:22.873 "uuid": "1ff5518e-f7be-4601-840b-bee2a3c08874" 00:13:22.873 } 00:13:22.873 ] 00:13:22.873 } 00:13:22.873 ] 00:13:22.873 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 998631 00:13:22.873 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:22.873 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:22.873 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:22.873 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:22.873 [2024-11-15 12:35:03.197332] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:13:22.873 [2024-11-15 12:35:03.197377] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid998768 ] 00:13:23.133 [2024-11-15 12:35:03.248781] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:23.133 [2024-11-15 12:35:03.253995] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:23.133 [2024-11-15 12:35:03.254045] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7eff233ff000 00:13:23.133 [2024-11-15 12:35:03.254990] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.133 [2024-11-15 12:35:03.256012] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.133 [2024-11-15 12:35:03.259743] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.133 [2024-11-15 12:35:03.260017] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:23.133 [2024-11-15 12:35:03.261025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:23.133 [2024-11-15 12:35:03.262045] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.133 [2024-11-15 12:35:03.263053] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:23.133 [2024-11-15 12:35:03.264059] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:13:23.133 [2024-11-15 12:35:03.265068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:23.133 [2024-11-15 12:35:03.265101] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7eff233f4000 00:13:23.133 [2024-11-15 12:35:03.266216] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:23.133 [2024-11-15 12:35:03.283872] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:23.133 [2024-11-15 12:35:03.283909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:23.133 [2024-11-15 12:35:03.285995] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:23.133 [2024-11-15 12:35:03.286062] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:23.133 [2024-11-15 12:35:03.286147] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:23.133 [2024-11-15 12:35:03.286169] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:23.133 [2024-11-15 12:35:03.286180] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:23.133 [2024-11-15 12:35:03.287001] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:23.133 [2024-11-15 12:35:03.287027] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:23.133 [2024-11-15 12:35:03.287042] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:23.133 [2024-11-15 12:35:03.288024] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:23.133 [2024-11-15 12:35:03.288046] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:23.133 [2024-11-15 12:35:03.288060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:23.133 [2024-11-15 12:35:03.289021] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:23.133 [2024-11-15 12:35:03.289056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:23.133 [2024-11-15 12:35:03.290050] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:23.133 [2024-11-15 12:35:03.290070] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:13:23.133 [2024-11-15 12:35:03.290079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:23.133 [2024-11-15 12:35:03.290091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:23.133 [2024-11-15 12:35:03.290201] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:23.134 [2024-11-15 12:35:03.290210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:23.134 [2024-11-15 12:35:03.290217] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:23.134 [2024-11-15 12:35:03.291049] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:23.134 [2024-11-15 12:35:03.292054] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:23.134 [2024-11-15 12:35:03.293076] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:23.134 [2024-11-15 12:35:03.294054] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:23.134 [2024-11-15 12:35:03.294135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:23.134 [2024-11-15 12:35:03.295074] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:23.134 [2024-11-15 12:35:03.295095] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:23.134 [2024-11-15 12:35:03.295105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.295130] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:23.134 [2024-11-15 12:35:03.295149] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.295174] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:23.134 [2024-11-15 12:35:03.295185] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.134 [2024-11-15 12:35:03.295192] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.134 [2024-11-15 12:35:03.295210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:23.134 [2024-11-15 12:35:03.302747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:23.134 
[2024-11-15 12:35:03.302770] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:23.134 [2024-11-15 12:35:03.302780] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:23.134 [2024-11-15 12:35:03.302787] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:23.134 [2024-11-15 12:35:03.302795] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:23.134 [2024-11-15 12:35:03.302808] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:23.134 [2024-11-15 12:35:03.302817] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:23.134 [2024-11-15 12:35:03.302825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.302840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.302857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:23.134 [2024-11-15 12:35:03.310728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:23.134 [2024-11-15 12:35:03.310763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.134 [2024-11-15 12:35:03.310777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.134 [2024-11-15 12:35:03.310789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.134 [2024-11-15 12:35:03.310802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.134 [2024-11-15 12:35:03.310811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.310823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.310837] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:23.134 [2024-11-15 12:35:03.318743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:23.134 [2024-11-15 12:35:03.318775] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:23.134 [2024-11-15 12:35:03.318786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:13:23.134 [2024-11-15 12:35:03.318799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.318812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.318827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:23.134 [2024-11-15 12:35:03.326747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:23.134 [2024-11-15 12:35:03.326833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.326851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.326865] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:23.134 [2024-11-15 12:35:03.326875] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:23.134 [2024-11-15 12:35:03.326881] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.134 [2024-11-15 12:35:03.326891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:23.134 [2024-11-15 12:35:03.334745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:23.134 [2024-11-15 12:35:03.334768] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:23.134 [2024-11-15 12:35:03.334787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.334803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.334817] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:23.134 [2024-11-15 12:35:03.334826] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.134 [2024-11-15 12:35:03.334833] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.134 [2024-11-15 12:35:03.334843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:23.134 [2024-11-15 12:35:03.342742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:23.134 [2024-11-15 12:35:03.342773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.342790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.342805] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:23.134 [2024-11-15 12:35:03.342814] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.134 [2024-11-15 12:35:03.342820] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.134 [2024-11-15 12:35:03.342830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:23.134 [2024-11-15 12:35:03.350743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:23.134 [2024-11-15 12:35:03.350776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.350794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.350808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.350819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.350828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.350837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.350845] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:23.134 [2024-11-15 12:35:03.350853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:23.134 [2024-11-15 12:35:03.350862] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:23.134 [2024-11-15 12:35:03.350886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:23.134 [2024-11-15 12:35:03.358744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:23.134 [2024-11-15 12:35:03.358771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:23.134 [2024-11-15 12:35:03.366733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:23.134 [2024-11-15 12:35:03.366769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:23.134 [2024-11-15 12:35:03.374742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:13:23.134 [2024-11-15 12:35:03.374768] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:23.134 [2024-11-15 12:35:03.382733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:23.134 [2024-11-15 12:35:03.382764] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:23.135 [2024-11-15 12:35:03.382775] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:23.135 [2024-11-15 12:35:03.382782] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:23.135 [2024-11-15 12:35:03.382788] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:23.135 [2024-11-15 12:35:03.382793] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:23.135 [2024-11-15 12:35:03.382803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:23.135 [2024-11-15 12:35:03.382816] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:23.135 [2024-11-15 12:35:03.382824] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:23.135 [2024-11-15 12:35:03.382830] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.135 [2024-11-15 12:35:03.382839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:23.135 [2024-11-15 12:35:03.382855] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:23.135 [2024-11-15 12:35:03.382864] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.135 [2024-11-15 12:35:03.382870] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.135 [2024-11-15 12:35:03.382879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:23.135 [2024-11-15 12:35:03.382892] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:23.135 [2024-11-15 12:35:03.382900] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:23.135 [2024-11-15 12:35:03.382906] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.135 [2024-11-15 12:35:03.382915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:23.135 [2024-11-15 12:35:03.390733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:23.135 [2024-11-15 12:35:03.390761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:23.135 [2024-11-15 12:35:03.390780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:23.135 
[2024-11-15 12:35:03.390792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:23.135 ===================================================== 00:13:23.135 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:23.135 ===================================================== 00:13:23.135 Controller Capabilities/Features 00:13:23.135 ================================ 00:13:23.135 Vendor ID: 4e58 00:13:23.135 Subsystem Vendor ID: 4e58 00:13:23.135 Serial Number: SPDK2 00:13:23.135 Model Number: SPDK bdev Controller 00:13:23.135 Firmware Version: 25.01 00:13:23.135 Recommended Arb Burst: 6 00:13:23.135 IEEE OUI Identifier: 8d 6b 50 00:13:23.135 Multi-path I/O 00:13:23.135 May have multiple subsystem ports: Yes 00:13:23.135 May have multiple controllers: Yes 00:13:23.135 Associated with SR-IOV VF: No 00:13:23.135 Max Data Transfer Size: 131072 00:13:23.135 Max Number of Namespaces: 32 00:13:23.135 Max Number of I/O Queues: 127 00:13:23.135 NVMe Specification Version (VS): 1.3 00:13:23.135 NVMe Specification Version (Identify): 1.3 00:13:23.135 Maximum Queue Entries: 256 00:13:23.135 Contiguous Queues Required: Yes 00:13:23.135 Arbitration Mechanisms Supported 00:13:23.135 Weighted Round Robin: Not Supported 00:13:23.135 Vendor Specific: Not Supported 00:13:23.135 Reset Timeout: 15000 ms 00:13:23.135 Doorbell Stride: 4 bytes 00:13:23.135 NVM Subsystem Reset: Not Supported 00:13:23.135 Command Sets Supported 00:13:23.135 NVM Command Set: Supported 00:13:23.135 Boot Partition: Not Supported 00:13:23.135 Memory Page Size Minimum: 4096 bytes 00:13:23.135 Memory Page Size Maximum: 4096 bytes 00:13:23.135 Persistent Memory Region: Not Supported 00:13:23.135 Optional Asynchronous Events Supported 00:13:23.135 Namespace Attribute Notices: Supported 00:13:23.135 Firmware Activation Notices: Not Supported 00:13:23.135 ANA Change Notices: Not Supported 00:13:23.135 PLE Aggregate Log Change Notices: Not Supported 00:13:23.135 LBA Status Info Alert Notices: Not Supported 00:13:23.135 EGE Aggregate Log Change Notices: Not Supported 00:13:23.135 Normal NVM Subsystem Shutdown event: Not Supported 00:13:23.135 Zone Descriptor Change Notices: Not Supported 00:13:23.135 Discovery Log Change Notices: Not Supported 00:13:23.135 Controller Attributes 00:13:23.135 128-bit Host Identifier: Supported 00:13:23.135 Non-Operational Permissive Mode: Not Supported 00:13:23.135 NVM Sets: Not Supported 00:13:23.135 Read Recovery Levels: Not Supported 00:13:23.135 Endurance Groups: Not Supported 00:13:23.135 Predictable Latency Mode: Not Supported 00:13:23.135 Traffic Based Keep ALive: Not Supported 00:13:23.135 Namespace Granularity: Not Supported 00:13:23.135 SQ Associations: Not Supported 00:13:23.135 UUID List: Not Supported 00:13:23.135 Multi-Domain Subsystem: Not Supported 00:13:23.135 Fixed Capacity Management: Not Supported 00:13:23.135 Variable Capacity Management: Not Supported 00:13:23.135 Delete Endurance Group: Not Supported 00:13:23.135 Delete NVM Set: Not Supported 00:13:23.135 Extended LBA Formats Supported: Not Supported 00:13:23.135 Flexible Data Placement Supported: Not Supported 00:13:23.135 00:13:23.135 Controller Memory Buffer Support 00:13:23.135 ================================ 00:13:23.135 Supported: No 00:13:23.135 00:13:23.135 Persistent Memory Region Support 00:13:23.135 ================================ 00:13:23.135 Supported: No 00:13:23.135 00:13:23.135 Admin Command Set Attributes 
00:13:23.135 ============================ 00:13:23.135 Security Send/Receive: Not Supported 00:13:23.135 Format NVM: Not Supported 00:13:23.135 Firmware Activate/Download: Not Supported 00:13:23.135 Namespace Management: Not Supported 00:13:23.135 Device Self-Test: Not Supported 00:13:23.135 Directives: Not Supported 00:13:23.135 NVMe-MI: Not Supported 00:13:23.135 Virtualization Management: Not Supported 00:13:23.135 Doorbell Buffer Config: Not Supported 00:13:23.135 Get LBA Status Capability: Not Supported 00:13:23.135 Command & Feature Lockdown Capability: Not Supported 00:13:23.135 Abort Command Limit: 4 00:13:23.135 Async Event Request Limit: 4 00:13:23.135 Number of Firmware Slots: N/A 00:13:23.135 Firmware Slot 1 Read-Only: N/A 00:13:23.135 Firmware Activation Without Reset: N/A 00:13:23.135 Multiple Update Detection Support: N/A 00:13:23.135 Firmware Update Granularity: No Information Provided 00:13:23.135 Per-Namespace SMART Log: No 00:13:23.135 Asymmetric Namespace Access Log Page: Not Supported 00:13:23.135 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:23.135 Command Effects Log Page: Supported 00:13:23.135 Get Log Page Extended Data: Supported 00:13:23.135 Telemetry Log Pages: Not Supported 00:13:23.135 Persistent Event Log Pages: Not Supported 00:13:23.135 Supported Log Pages Log Page: May Support 00:13:23.135 Commands Supported & Effects Log Page: Not Supported 00:13:23.135 Feature Identifiers & Effects Log Page:May Support 00:13:23.135 NVMe-MI Commands & Effects Log Page: May Support 00:13:23.135 Data Area 4 for Telemetry Log: Not Supported 00:13:23.135 Error Log Page Entries Supported: 128 00:13:23.135 Keep Alive: Supported 00:13:23.135 Keep Alive Granularity: 10000 ms 00:13:23.135 00:13:23.135 NVM Command Set Attributes 00:13:23.135 ========================== 00:13:23.135 Submission Queue Entry Size 00:13:23.135 Max: 64 00:13:23.135 Min: 64 00:13:23.135 Completion Queue Entry Size 00:13:23.135 Max: 16 00:13:23.135 Min: 16 00:13:23.135 Number of Namespaces: 32 00:13:23.135 Compare Command: Supported 00:13:23.135 Write Uncorrectable Command: Not Supported 00:13:23.135 Dataset Management Command: Supported 00:13:23.135 Write Zeroes Command: Supported 00:13:23.135 Set Features Save Field: Not Supported 00:13:23.135 Reservations: Not Supported 00:13:23.135 Timestamp: Not Supported 00:13:23.135 Copy: Supported 00:13:23.135 Volatile Write Cache: Present 00:13:23.135 Atomic Write Unit (Normal): 1 00:13:23.135 Atomic Write Unit (PFail): 1 00:13:23.135 Atomic Compare & Write Unit: 1 00:13:23.135 Fused Compare & Write: Supported 00:13:23.135 Scatter-Gather List 00:13:23.135 SGL Command Set: Supported (Dword aligned) 00:13:23.135 SGL Keyed: Not Supported 00:13:23.135 SGL Bit Bucket Descriptor: Not Supported 00:13:23.135 SGL Metadata Pointer: Not Supported 00:13:23.135 Oversized SGL: Not Supported 00:13:23.135 SGL Metadata Address: Not Supported 00:13:23.135 SGL Offset: Not Supported 00:13:23.135 Transport SGL Data Block: Not Supported 00:13:23.135 Replay Protected Memory Block: Not Supported 00:13:23.135 00:13:23.135 Firmware Slot Information 00:13:23.135 ========================= 00:13:23.135 Active slot: 1 00:13:23.135 Slot 1 Firmware Revision: 25.01 00:13:23.135 00:13:23.135 00:13:23.135 Commands Supported and Effects 00:13:23.135 ============================== 00:13:23.135 Admin Commands 00:13:23.135 -------------- 00:13:23.135 Get Log Page (02h): Supported 00:13:23.135 Identify (06h): Supported 00:13:23.135 Abort (08h): Supported 00:13:23.136 Set Features (09h): Supported 
00:13:23.136 Get Features (0Ah): Supported 00:13:23.136 Asynchronous Event Request (0Ch): Supported 00:13:23.136 Keep Alive (18h): Supported 00:13:23.136 I/O Commands 00:13:23.136 ------------ 00:13:23.136 Flush (00h): Supported LBA-Change 00:13:23.136 Write (01h): Supported LBA-Change 00:13:23.136 Read (02h): Supported 00:13:23.136 Compare (05h): Supported 00:13:23.136 Write Zeroes (08h): Supported LBA-Change 00:13:23.136 Dataset Management (09h): Supported LBA-Change 00:13:23.136 Copy (19h): Supported LBA-Change 00:13:23.136 00:13:23.136 Error Log 00:13:23.136 ========= 00:13:23.136 00:13:23.136 Arbitration 00:13:23.136 =========== 00:13:23.136 Arbitration Burst: 1 00:13:23.136 00:13:23.136 Power Management 00:13:23.136 ================ 00:13:23.136 Number of Power States: 1 00:13:23.136 Current Power State: Power State #0 00:13:23.136 Power State #0: 00:13:23.136 Max Power: 0.00 W 00:13:23.136 Non-Operational State: Operational 00:13:23.136 Entry Latency: Not Reported 00:13:23.136 Exit Latency: Not Reported 00:13:23.136 Relative Read Throughput: 0 00:13:23.136 Relative Read Latency: 0 00:13:23.136 Relative Write Throughput: 0 00:13:23.136 Relative Write Latency: 0 00:13:23.136 Idle Power: Not Reported 00:13:23.136 Active Power: Not Reported 00:13:23.136 Non-Operational Permissive Mode: Not Supported 00:13:23.136 00:13:23.136 Health Information 00:13:23.136 ================== 00:13:23.136 Critical Warnings: 00:13:23.136 Available Spare Space: OK 00:13:23.136 Temperature: OK 00:13:23.136 Device Reliability: OK 00:13:23.136 Read Only: No 00:13:23.136 Volatile Memory Backup: OK 00:13:23.136 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:23.136 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:23.136 Available Spare: 0% 00:13:23.136 Available Sp[2024-11-15 12:35:03.390922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:23.136 [2024-11-15 12:35:03.398733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:23.136 [2024-11-15 12:35:03.398786] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:23.136 [2024-11-15 12:35:03.398804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.136 [2024-11-15 12:35:03.398815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.136 [2024-11-15 12:35:03.398825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.136 [2024-11-15 12:35:03.398835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.136 [2024-11-15 12:35:03.398917] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:23.136 [2024-11-15 12:35:03.398939] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:23.136 [2024-11-15 12:35:03.399917] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:23.136 [2024-11-15 12:35:03.400005] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:23.136 [2024-11-15 12:35:03.400035] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:23.136 [2024-11-15 12:35:03.400922] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:23.136 [2024-11-15 12:35:03.400947] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:23.136 [2024-11-15 12:35:03.401026] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:23.136 [2024-11-15 12:35:03.402199] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:23.136 are Threshold: 0% 00:13:23.136 Life Percentage Used: 0% 00:13:23.136 Data Units Read: 0 00:13:23.136 Data Units Written: 0 00:13:23.136 Host Read Commands: 0 00:13:23.136 Host Write Commands: 0 00:13:23.136 Controller Busy Time: 0 minutes 00:13:23.136 Power Cycles: 0 00:13:23.136 Power On Hours: 0 hours 00:13:23.136 Unsafe Shutdowns: 0 00:13:23.136 Unrecoverable Media Errors: 0 00:13:23.136 Lifetime Error Log Entries: 0 00:13:23.136 Warning Temperature Time: 0 minutes 00:13:23.136 Critical Temperature Time: 0 minutes 00:13:23.136 00:13:23.136 Number of Queues 00:13:23.136 ================ 00:13:23.136 Number of I/O Submission Queues: 127 00:13:23.136 Number of I/O Completion Queues: 127 00:13:23.136 00:13:23.136 Active Namespaces 00:13:23.136 ================= 00:13:23.136 Namespace ID:1 00:13:23.136 Error Recovery Timeout: Unlimited 00:13:23.136 Command Set Identifier: NVM (00h) 00:13:23.136 Deallocate: Supported 00:13:23.136 Deallocated/Unwritten Error: Not Supported 00:13:23.136 Deallocated Read Value: Unknown 00:13:23.136 Deallocate in Write Zeroes: Not Supported 00:13:23.136 Deallocated Guard Field: 0xFFFF 00:13:23.136 Flush: Supported 00:13:23.136 Reservation: Supported 00:13:23.136 Namespace Sharing Capabilities: Multiple Controllers 00:13:23.136 Size (in LBAs): 131072 (0GiB) 00:13:23.136 Capacity (in LBAs): 131072 (0GiB) 00:13:23.136 Utilization (in LBAs): 131072 (0GiB) 00:13:23.136 NGUID: 1FF5518EF7BE4601840BBEE2A3C08874 00:13:23.136 UUID: 1ff5518e-f7be-4601-840b-bee2a3c08874 00:13:23.136 Thin Provisioning: Not Supported 00:13:23.136 Per-NS Atomic Units: Yes 00:13:23.136 Atomic Boundary Size (Normal): 0 00:13:23.136 Atomic Boundary Size (PFail): 0 00:13:23.136 Atomic Boundary Offset: 0 00:13:23.136 Maximum Single Source Range Length: 65535 00:13:23.136 Maximum Copy Length: 65535 00:13:23.136 Maximum Source Range Count: 1 00:13:23.136 NGUID/EUI64 Never Reused: No 00:13:23.136 Namespace Write Protected: No 00:13:23.136 Number of LBA Formats: 1 00:13:23.136 Current LBA Format: LBA Format #00 00:13:23.136 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:23.136 00:13:23.136 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:23.394 [2024-11-15 12:35:03.648592] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:28.657 Initializing NVMe Controllers 00:13:28.657 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:28.657 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:28.657 Initialization complete. Launching workers. 00:13:28.657 ======================================================== 00:13:28.657 Latency(us) 00:13:28.657 Device Information : IOPS MiB/s Average min max 00:13:28.657 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33279.89 130.00 3845.32 1180.76 7534.44 00:13:28.657 ======================================================== 00:13:28.657 Total : 33279.89 130.00 3845.32 1180.76 7534.44 00:13:28.657 00:13:28.657 [2024-11-15 12:35:08.753074] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:28.657 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:28.915 [2024-11-15 12:35:09.019828] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:34.181 Initializing NVMe Controllers 00:13:34.181 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:34.181 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:34.181 Initialization complete. Launching workers. 00:13:34.181 ======================================================== 00:13:34.181 Latency(us) 00:13:34.181 Device Information : IOPS MiB/s Average min max 00:13:34.181 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30168.99 117.85 4242.93 1245.35 7656.59 00:13:34.181 ======================================================== 00:13:34.181 Total : 30168.99 117.85 4242.93 1245.35 7656.59 00:13:34.181 00:13:34.181 [2024-11-15 12:35:14.042866] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:34.181 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:34.181 [2024-11-15 12:35:14.265790] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:39.441 [2024-11-15 12:35:19.398873] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:39.441 Initializing NVMe Controllers 00:13:39.441 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:39.441 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:39.441 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:39.441 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:39.441 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:39.441 Initialization complete. Launching workers. 
00:13:39.441 Starting thread on core 2 00:13:39.441 Starting thread on core 3 00:13:39.441 Starting thread on core 1 00:13:39.441 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:39.441 [2024-11-15 12:35:19.731206] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:42.723 [2024-11-15 12:35:22.787731] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:42.723 Initializing NVMe Controllers 00:13:42.723 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:42.723 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:42.723 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:42.723 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:42.723 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:42.723 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:42.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:42.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:42.723 Initialization complete. Launching workers. 00:13:42.723 Starting thread on core 1 with urgent priority queue 00:13:42.723 Starting thread on core 2 with urgent priority queue 00:13:42.723 Starting thread on core 3 with urgent priority queue 00:13:42.723 Starting thread on core 0 with urgent priority queue 00:13:42.723 SPDK bdev Controller (SPDK2 ) core 0: 5869.67 IO/s 17.04 secs/100000 ios 00:13:42.723 SPDK bdev Controller (SPDK2 ) core 1: 5513.67 IO/s 18.14 secs/100000 ios 00:13:42.723 SPDK bdev Controller (SPDK2 ) core 2: 5874.67 IO/s 17.02 secs/100000 ios 00:13:42.723 SPDK bdev Controller (SPDK2 ) core 3: 6079.33 IO/s 16.45 secs/100000 ios 00:13:42.723 ======================================================== 00:13:42.723 00:13:42.723 12:35:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:42.980 [2024-11-15 12:35:23.097232] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:42.980 Initializing NVMe Controllers 00:13:42.980 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:42.981 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:42.981 Namespace ID: 1 size: 0GB 00:13:42.981 Initialization complete. 00:13:42.981 INFO: using host memory buffer for IO 00:13:42.981 Hello world! 
00:13:42.981 [2024-11-15 12:35:23.107418] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:42.981 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:43.238 [2024-11-15 12:35:23.410174] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:44.172 Initializing NVMe Controllers 00:13:44.172 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:44.172 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:44.172 Initialization complete. Launching workers. 00:13:44.172 submit (in ns) avg, min, max = 7710.8, 3486.7, 4015648.9 00:13:44.172 complete (in ns) avg, min, max = 29299.6, 2055.6, 4034802.2 00:13:44.172 00:13:44.172 Submit histogram 00:13:44.172 ================ 00:13:44.172 Range in us Cumulative Count 00:13:44.172 3.484 - 3.508: 0.5226% ( 67) 00:13:44.172 3.508 - 3.532: 1.4431% ( 118) 00:13:44.172 3.532 - 3.556: 4.4696% ( 388) 00:13:44.172 3.556 - 3.579: 9.4618% ( 640) 00:13:44.172 3.579 - 3.603: 17.2465% ( 998) 00:13:44.172 3.603 - 3.627: 25.5304% ( 1062) 00:13:44.172 3.627 - 3.650: 33.9548% ( 1080) 00:13:44.172 3.650 - 3.674: 40.4524% ( 833) 00:13:44.172 3.674 - 3.698: 47.4103% ( 892) 00:13:44.172 3.698 - 3.721: 53.3151% ( 757) 00:13:44.172 3.721 - 3.745: 57.3245% ( 514) 00:13:44.172 3.745 - 3.769: 60.9828% ( 469) 00:13:44.172 3.769 - 3.793: 64.1966% ( 412) 00:13:44.172 3.793 - 3.816: 68.0421% ( 493) 00:13:44.172 3.816 - 3.840: 72.3713% ( 555) 00:13:44.172 3.840 - 3.864: 76.3495% ( 510) 00:13:44.172 3.864 - 3.887: 79.8986% ( 455) 00:13:44.172 3.887 - 3.911: 83.0265% ( 401) 00:13:44.172 3.911 - 3.935: 85.6474% ( 336) 00:13:44.172 3.935 - 3.959: 87.5429% ( 243) 00:13:44.172 3.959 - 3.982: 89.1732% ( 209) 00:13:44.172 3.982 - 4.006: 90.5538% ( 177) 00:13:44.172 4.006 - 4.030: 91.6693% ( 143) 00:13:44.172 4.030 - 4.053: 92.6599% ( 127) 00:13:44.172 4.053 - 4.077: 93.6661% ( 129) 00:13:44.172 4.077 - 4.101: 94.2980% ( 81) 00:13:44.172 4.101 - 4.124: 94.9688% ( 86) 00:13:44.172 4.124 - 4.148: 95.4056% ( 56) 00:13:44.172 4.148 - 4.172: 95.7098% ( 39) 00:13:44.172 4.172 - 4.196: 95.8970% ( 24) 00:13:44.172 4.196 - 4.219: 96.0530% ( 20) 00:13:44.172 4.219 - 4.243: 96.2402% ( 24) 00:13:44.172 4.243 - 4.267: 96.4041% ( 21) 00:13:44.172 4.267 - 4.290: 96.4977% ( 12) 00:13:44.172 4.290 - 4.314: 96.5757% ( 10) 00:13:44.172 4.314 - 4.338: 96.6303% ( 7) 00:13:44.172 4.338 - 4.361: 96.7161% ( 11) 00:13:44.172 4.361 - 4.385: 96.7473% ( 4) 00:13:44.172 4.385 - 4.409: 96.7863% ( 5) 00:13:44.172 4.409 - 4.433: 96.8175% ( 4) 00:13:44.172 4.433 - 4.456: 96.8331% ( 2) 00:13:44.172 4.456 - 4.480: 96.8487% ( 2) 00:13:44.172 4.480 - 4.504: 96.8643% ( 2) 00:13:44.172 4.504 - 4.527: 96.9111% ( 6) 00:13:44.172 4.551 - 4.575: 96.9189% ( 1) 00:13:44.172 4.575 - 4.599: 96.9267% ( 1) 00:13:44.172 4.599 - 4.622: 96.9657% ( 5) 00:13:44.172 4.622 - 4.646: 97.0281% ( 8) 00:13:44.172 4.646 - 4.670: 97.0671% ( 5) 00:13:44.172 4.670 - 4.693: 97.0983% ( 4) 00:13:44.172 4.693 - 4.717: 97.1217% ( 3) 00:13:44.172 4.717 - 4.741: 97.1763% ( 7) 00:13:44.172 4.741 - 4.764: 97.2465% ( 9) 00:13:44.172 4.764 - 4.788: 97.2777% ( 4) 00:13:44.172 4.788 - 4.812: 97.3089% ( 4) 00:13:44.172 4.812 - 4.836: 97.3791% ( 9) 00:13:44.172 4.836 - 4.859: 97.4025% ( 3) 00:13:44.172 4.859 - 
4.883: 97.4493% ( 6) 00:13:44.172 4.883 - 4.907: 97.4649% ( 2) 00:13:44.172 4.907 - 4.930: 97.5039% ( 5) 00:13:44.172 4.930 - 4.954: 97.5975% ( 12) 00:13:44.172 4.954 - 4.978: 97.6209% ( 3) 00:13:44.172 4.978 - 5.001: 97.6287% ( 1) 00:13:44.172 5.001 - 5.025: 97.6599% ( 4) 00:13:44.172 5.025 - 5.049: 97.6989% ( 5) 00:13:44.172 5.049 - 5.073: 97.7301% ( 4) 00:13:44.172 5.096 - 5.120: 97.7379% ( 1) 00:13:44.172 5.120 - 5.144: 97.7457% ( 1) 00:13:44.172 5.144 - 5.167: 97.7613% ( 2) 00:13:44.172 5.167 - 5.191: 97.7691% ( 1) 00:13:44.172 5.191 - 5.215: 97.7769% ( 1) 00:13:44.172 5.215 - 5.239: 97.7847% ( 1) 00:13:44.172 5.239 - 5.262: 97.8003% ( 2) 00:13:44.172 5.286 - 5.310: 97.8159% ( 2) 00:13:44.172 5.310 - 5.333: 97.8237% ( 1) 00:13:44.172 5.333 - 5.357: 97.8549% ( 4) 00:13:44.172 5.381 - 5.404: 97.8627% ( 1) 00:13:44.172 5.404 - 5.428: 97.8705% ( 1) 00:13:44.172 5.476 - 5.499: 97.8783% ( 1) 00:13:44.172 5.499 - 5.523: 97.8939% ( 2) 00:13:44.172 5.570 - 5.594: 97.9017% ( 1) 00:13:44.172 5.594 - 5.618: 97.9095% ( 1) 00:13:44.172 5.618 - 5.641: 97.9329% ( 3) 00:13:44.172 5.641 - 5.665: 97.9407% ( 1) 00:13:44.172 5.665 - 5.689: 97.9485% ( 1) 00:13:44.172 5.736 - 5.760: 97.9563% ( 1) 00:13:44.172 5.784 - 5.807: 97.9641% ( 1) 00:13:44.172 5.879 - 5.902: 97.9797% ( 2) 00:13:44.172 6.116 - 6.163: 97.9875% ( 1) 00:13:44.172 6.210 - 6.258: 97.9953% ( 1) 00:13:44.172 6.258 - 6.305: 98.0031% ( 1) 00:13:44.172 6.400 - 6.447: 98.0109% ( 1) 00:13:44.172 6.495 - 6.542: 98.0265% ( 2) 00:13:44.172 6.779 - 6.827: 98.0343% ( 1) 00:13:44.172 6.874 - 6.921: 98.0421% ( 1) 00:13:44.172 7.064 - 7.111: 98.0499% ( 1) 00:13:44.172 7.206 - 7.253: 98.0577% ( 1) 00:13:44.172 7.443 - 7.490: 98.0733% ( 2) 00:13:44.172 7.490 - 7.538: 98.0811% ( 1) 00:13:44.172 7.585 - 7.633: 98.0967% ( 2) 00:13:44.172 7.633 - 7.680: 98.1045% ( 1) 00:13:44.172 7.680 - 7.727: 98.1123% ( 1) 00:13:44.172 7.822 - 7.870: 98.1201% ( 1) 00:13:44.172 7.870 - 7.917: 98.1279% ( 1) 00:13:44.172 7.917 - 7.964: 98.1357% ( 1) 00:13:44.172 8.012 - 8.059: 98.1435% ( 1) 00:13:44.172 8.059 - 8.107: 98.1513% ( 1) 00:13:44.172 8.107 - 8.154: 98.1591% ( 1) 00:13:44.172 8.154 - 8.201: 98.1669% ( 1) 00:13:44.172 8.201 - 8.249: 98.1747% ( 1) 00:13:44.172 8.296 - 8.344: 98.1825% ( 1) 00:13:44.172 8.391 - 8.439: 98.1903% ( 1) 00:13:44.172 8.439 - 8.486: 98.1981% ( 1) 00:13:44.172 8.486 - 8.533: 98.2371% ( 5) 00:13:44.172 8.628 - 8.676: 98.2449% ( 1) 00:13:44.172 8.676 - 8.723: 98.2683% ( 3) 00:13:44.172 8.723 - 8.770: 98.2995% ( 4) 00:13:44.172 8.818 - 8.865: 98.3073% ( 1) 00:13:44.172 8.865 - 8.913: 98.3151% ( 1) 00:13:44.172 8.960 - 9.007: 98.3229% ( 1) 00:13:44.172 9.007 - 9.055: 98.3385% ( 2) 00:13:44.172 9.150 - 9.197: 98.3463% ( 1) 00:13:44.172 9.197 - 9.244: 98.3541% ( 1) 00:13:44.172 9.292 - 9.339: 98.3619% ( 1) 00:13:44.172 9.339 - 9.387: 98.3697% ( 1) 00:13:44.172 9.387 - 9.434: 98.3775% ( 1) 00:13:44.172 9.576 - 9.624: 98.3853% ( 1) 00:13:44.172 9.624 - 9.671: 98.3931% ( 1) 00:13:44.172 9.719 - 9.766: 98.4009% ( 1) 00:13:44.172 9.813 - 9.861: 98.4243% ( 3) 00:13:44.172 9.861 - 9.908: 98.4321% ( 1) 00:13:44.172 9.908 - 9.956: 98.4477% ( 2) 00:13:44.172 10.003 - 10.050: 98.4555% ( 1) 00:13:44.172 10.335 - 10.382: 98.4633% ( 1) 00:13:44.172 10.382 - 10.430: 98.4789% ( 2) 00:13:44.172 10.430 - 10.477: 98.4867% ( 1) 00:13:44.172 10.524 - 10.572: 98.4945% ( 1) 00:13:44.172 10.619 - 10.667: 98.5101% ( 2) 00:13:44.173 10.714 - 10.761: 98.5179% ( 1) 00:13:44.173 10.761 - 10.809: 98.5257% ( 1) 00:13:44.173 10.856 - 10.904: 98.5335% ( 1) 00:13:44.173 10.999 - 
11.046: 98.5413% ( 1) 00:13:44.173 11.046 - 11.093: 98.5491% ( 1) 00:13:44.173 11.093 - 11.141: 98.5569% ( 1) 00:13:44.173 11.141 - 11.188: 98.5803% ( 3) 00:13:44.173 11.378 - 11.425: 98.5881% ( 1) 00:13:44.173 11.425 - 11.473: 98.5959% ( 1) 00:13:44.173 11.615 - 11.662: 98.6037% ( 1) 00:13:44.173 11.757 - 11.804: 98.6115% ( 1) 00:13:44.173 11.804 - 11.852: 98.6193% ( 1) 00:13:44.173 11.852 - 11.899: 98.6505% ( 4) 00:13:44.173 11.899 - 11.947: 98.6583% ( 1) 00:13:44.173 11.947 - 11.994: 98.6661% ( 1) 00:13:44.173 12.041 - 12.089: 98.6739% ( 1) 00:13:44.173 12.231 - 12.326: 98.6817% ( 1) 00:13:44.173 12.326 - 12.421: 98.6895% ( 1) 00:13:44.173 12.421 - 12.516: 98.7129% ( 3) 00:13:44.173 12.610 - 12.705: 98.7207% ( 1) 00:13:44.173 12.895 - 12.990: 98.7363% ( 2) 00:13:44.173 12.990 - 13.084: 98.7520% ( 2) 00:13:44.173 13.179 - 13.274: 98.7598% ( 1) 00:13:44.173 13.274 - 13.369: 98.7676% ( 1) 00:13:44.173 13.369 - 13.464: 98.7832% ( 2) 00:13:44.173 13.464 - 13.559: 98.7910% ( 1) 00:13:44.173 13.559 - 13.653: 98.7988% ( 1) 00:13:44.173 13.843 - 13.938: 98.8066% ( 1) 00:13:44.173 14.033 - 14.127: 98.8144% ( 1) 00:13:44.173 14.127 - 14.222: 98.8378% ( 3) 00:13:44.173 14.601 - 14.696: 98.8534% ( 2) 00:13:44.173 14.886 - 14.981: 98.8612% ( 1) 00:13:44.173 15.170 - 15.265: 98.8690% ( 1) 00:13:44.173 17.256 - 17.351: 98.9080% ( 5) 00:13:44.173 17.351 - 17.446: 98.9704% ( 8) 00:13:44.173 17.446 - 17.541: 99.0016% ( 4) 00:13:44.173 17.541 - 17.636: 99.0718% ( 9) 00:13:44.173 17.636 - 17.730: 99.1264% ( 7) 00:13:44.173 17.730 - 17.825: 99.1420% ( 2) 00:13:44.173 17.825 - 17.920: 99.1888% ( 6) 00:13:44.173 17.920 - 18.015: 99.2902% ( 13) 00:13:44.173 18.015 - 18.110: 99.3526% ( 8) 00:13:44.173 18.110 - 18.204: 99.4072% ( 7) 00:13:44.173 18.204 - 18.299: 99.5008% ( 12) 00:13:44.173 18.299 - 18.394: 99.5632% ( 8) 00:13:44.173 18.394 - 18.489: 99.6568% ( 12) 00:13:44.173 18.489 - 18.584: 99.7192% ( 8) 00:13:44.173 18.584 - 18.679: 99.7270% ( 1) 00:13:44.173 18.679 - 18.773: 99.7426% ( 2) 00:13:44.173 18.773 - 18.868: 99.7738% ( 4) 00:13:44.173 18.868 - 18.963: 99.8128% ( 5) 00:13:44.173 18.963 - 19.058: 99.8362% ( 3) 00:13:44.173 19.153 - 19.247: 99.8440% ( 1) 00:13:44.173 20.385 - 20.480: 99.8518% ( 1) 00:13:44.173 20.859 - 20.954: 99.8596% ( 1) 00:13:44.173 22.756 - 22.850: 99.8674% ( 1) 00:13:44.173 24.841 - 25.031: 99.8752% ( 1) 00:13:44.173 27.686 - 27.876: 99.8830% ( 1) 00:13:44.173 27.876 - 28.065: 99.8908% ( 1) 00:13:44.173 28.255 - 28.444: 99.8986% ( 1) 00:13:44.173 28.634 - 28.824: 99.9064% ( 1) 00:13:44.173 3980.705 - 4004.978: 99.9844% ( 10) 00:13:44.173 4004.978 - 4029.250: 100.0000% ( 2) 00:13:44.173 00:13:44.173 Complete histogram 00:13:44.173 ================== 00:13:44.173 Range in us Cumulative Count 00:13:44.173 2.050 - 2.062: 0.3120% ( 40) 00:13:44.173 2.062 - 2.074: 29.7972% ( 3780) 00:13:44.173 2.074 - 2.086: 46.7629% ( 2175) 00:13:44.173 2.086 - 2.098: 48.7832% ( 259) 00:13:44.173 2.098 - 2.110: 57.9563% ( 1176) 00:13:44.173 2.110 - 2.121: 60.6396% ( 344) 00:13:44.173 2.121 - 2.133: 63.1591% ( 323) 00:13:44.173 2.133 - 2.145: 73.5257% ( 1329) 00:13:44.173 2.145 - 2.157: 77.0515% ( 452) 00:13:44.173 2.157 - 2.169: 78.1045% ( 135) 00:13:44.173 2.169 - 2.181: 80.5070% ( 308) 00:13:44.173 2.181 - 2.193: 81.3495% ( 108) 00:13:44.173 2.193 - 2.204: 82.2777% ( 119) 00:13:44.173 2.204 - 2.216: 87.3869% ( 655) 00:13:44.173 2.216 - 2.228: 90.3354% ( 378) 00:13:44.173 2.228 - 2.240: 91.5913% ( 161) 00:13:44.173 2.240 - 2.252: 92.8393% ( 160) 00:13:44.173 2.252 - 2.264: 93.4321% ( 76) 
00:13:44.173 2.264 - 2.276: 93.6271% ( 25) 00:13:44.173 2.276 - 2.287: 94.0328% ( 52) 00:13:44.173 2.287 - 2.299: 94.7582% ( 93) 00:13:44.173 2.299 - 2.311: 95.2262% ( 60) 00:13:44.173 2.311 - 2.323: 95.3588% ( 17) 00:13:44.173 2.323 - 2.335: 95.4290% ( 9) 00:13:44.173 2.335 - 2.347: 95.5460% ( 15) 00:13:44.173 2.347 - 2.359: 95.7020% ( 20) 00:13:44.173 2.359 - 2.370: 95.9984% ( 38) 00:13:44.173 2.370 - 2.382: 96.4431% ( 57) 00:13:44.173 2.382 - 2.394: 96.8721% ( 55) 00:13:44.173 2.394 - 2.406: 97.1373% ( 34) 00:13:44.173 2.406 - 2.418: 97.3557% ( 28) 00:13:44.173 2.418 - 2.430: 97.5585% ( 26) 00:13:44.173 2.430 - 2.441: 97.7223% ( 21) 00:13:44.173 2.441 - 2.453: 97.8861% ( 21) 00:13:44.173 2.453 - 2.465: 98.0343% ( 19) 00:13:44.173 2.465 - 2.477: 98.1513% ( 15) 00:13:44.173 2.477 - 2.489: 98.1981% ( 6) 00:13:44.173 2.489 - 2.501: 98.2527% ( 7) 00:13:44.173 2.501 - 2.513: 98.2917% ( 5) 00:13:44.173 2.513 - 2.524: 98.3073% ( 2) 00:13:44.173 2.524 - 2.536: 98.3463% ( 5) 00:13:44.173 2.536 - 2.548: 98.3697% ( 3) 00:13:44.173 2.548 - 2.560: 98.3931% ( 3) 00:13:44.173 2.560 - 2.572: 98.4087% ( 2) 00:13:44.173 2.572 - 2.584: 98.4165% ( 1) 00:13:44.173 2.596 - 2.607: 98.4399% ( 3) 00:13:44.173 2.619 - 2.631: 98.4477% ( 1) 00:13:44.173 2.631 - 2.643: 98.4555% ( 1) 00:13:44.173 2.643 - 2.655: 98.4633% ( 1) 00:13:44.173 2.690 - 2.702: 98.4789% ( 2) 00:13:44.431 2.702 - 2.714: 9[2024-11-15 12:35:24.515631] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:44.431 8.4945% ( 2) 00:13:44.431 2.738 - 2.750: 98.5101% ( 2) 00:13:44.431 3.034 - 3.058: 98.5179% ( 1) 00:13:44.431 3.176 - 3.200: 98.5257% ( 1) 00:13:44.431 3.461 - 3.484: 98.5335% ( 1) 00:13:44.431 3.556 - 3.579: 98.5413% ( 1) 00:13:44.431 3.603 - 3.627: 98.5569% ( 2) 00:13:44.431 3.627 - 3.650: 98.5725% ( 2) 00:13:44.431 3.650 - 3.674: 98.5803% ( 1) 00:13:44.431 3.674 - 3.698: 98.5881% ( 1) 00:13:44.431 3.698 - 3.721: 98.5959% ( 1) 00:13:44.431 3.745 - 3.769: 98.6037% ( 1) 00:13:44.431 3.793 - 3.816: 98.6193% ( 2) 00:13:44.431 3.816 - 3.840: 98.6271% ( 1) 00:13:44.431 3.840 - 3.864: 98.6505% ( 3) 00:13:44.431 3.887 - 3.911: 98.6583% ( 1) 00:13:44.431 3.935 - 3.959: 98.6817% ( 3) 00:13:44.431 4.148 - 4.172: 98.6973% ( 2) 00:13:44.431 4.172 - 4.196: 98.7051% ( 1) 00:13:44.431 4.219 - 4.243: 98.7129% ( 1) 00:13:44.431 4.243 - 4.267: 98.7285% ( 2) 00:13:44.431 5.736 - 5.760: 98.7363% ( 1) 00:13:44.431 6.590 - 6.637: 98.7520% ( 2) 00:13:44.431 6.827 - 6.874: 98.7598% ( 1) 00:13:44.431 6.874 - 6.921: 98.7754% ( 2) 00:13:44.431 6.969 - 7.016: 98.7832% ( 1) 00:13:44.431 7.016 - 7.064: 98.7910% ( 1) 00:13:44.431 7.301 - 7.348: 98.7988% ( 1) 00:13:44.431 7.633 - 7.680: 98.8066% ( 1) 00:13:44.431 7.917 - 7.964: 98.8144% ( 1) 00:13:44.431 8.154 - 8.201: 98.8222% ( 1) 00:13:44.431 10.999 - 11.046: 98.8300% ( 1) 00:13:44.431 11.425 - 11.473: 98.8378% ( 1) 00:13:44.431 11.852 - 11.899: 98.8456% ( 1) 00:13:44.431 15.360 - 15.455: 98.8612% ( 2) 00:13:44.431 15.455 - 15.550: 98.8768% ( 2) 00:13:44.431 15.550 - 15.644: 98.8846% ( 1) 00:13:44.431 15.644 - 15.739: 98.8924% ( 1) 00:13:44.431 15.739 - 15.834: 98.9002% ( 1) 00:13:44.431 15.834 - 15.929: 98.9236% ( 3) 00:13:44.431 15.929 - 16.024: 98.9392% ( 2) 00:13:44.431 16.024 - 16.119: 98.9704% ( 4) 00:13:44.431 16.119 - 16.213: 98.9860% ( 2) 00:13:44.431 16.213 - 16.308: 99.0094% ( 3) 00:13:44.431 16.308 - 16.403: 99.0328% ( 3) 00:13:44.431 16.403 - 16.498: 99.0406% ( 1) 00:13:44.431 16.498 - 16.593: 99.0718% ( 4) 00:13:44.431 16.593 - 16.687: 99.1030% ( 
4) 00:13:44.431 16.687 - 16.782: 99.1732% ( 9) 00:13:44.431 16.782 - 16.877: 99.1966% ( 3) 00:13:44.431 16.877 - 16.972: 99.2122% ( 2) 00:13:44.432 16.972 - 17.067: 99.2356% ( 3) 00:13:44.432 17.067 - 17.161: 99.2590% ( 3) 00:13:44.432 17.161 - 17.256: 99.2746% ( 2) 00:13:44.432 17.256 - 17.351: 99.2824% ( 1) 00:13:44.432 17.446 - 17.541: 99.2980% ( 2) 00:13:44.432 17.541 - 17.636: 99.3058% ( 1) 00:13:44.432 18.110 - 18.204: 99.3136% ( 1) 00:13:44.432 19.816 - 19.911: 99.3214% ( 1) 00:13:44.432 3252.527 - 3276.800: 99.3292% ( 1) 00:13:44.432 3835.070 - 3859.342: 99.3370% ( 1) 00:13:44.432 3980.705 - 4004.978: 99.7972% ( 59) 00:13:44.432 4004.978 - 4029.250: 99.9922% ( 25) 00:13:44.432 4029.250 - 4053.523: 100.0000% ( 1) 00:13:44.432 00:13:44.432 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:44.432 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:44.432 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:44.432 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:44.432 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:44.689 [ 00:13:44.689 { 00:13:44.689 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:44.689 "subtype": "Discovery", 00:13:44.689 "listen_addresses": [], 00:13:44.689 "allow_any_host": true, 00:13:44.689 "hosts": [] 00:13:44.689 }, 00:13:44.689 { 00:13:44.689 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:44.689 "subtype": "NVMe", 00:13:44.689 "listen_addresses": [ 00:13:44.689 { 00:13:44.689 "trtype": "VFIOUSER", 00:13:44.689 "adrfam": "IPv4", 00:13:44.689 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:44.689 "trsvcid": "0" 00:13:44.689 } 00:13:44.689 ], 00:13:44.689 "allow_any_host": true, 00:13:44.689 "hosts": [], 00:13:44.689 "serial_number": "SPDK1", 00:13:44.689 "model_number": "SPDK bdev Controller", 00:13:44.689 "max_namespaces": 32, 00:13:44.689 "min_cntlid": 1, 00:13:44.689 "max_cntlid": 65519, 00:13:44.689 "namespaces": [ 00:13:44.689 { 00:13:44.689 "nsid": 1, 00:13:44.689 "bdev_name": "Malloc1", 00:13:44.689 "name": "Malloc1", 00:13:44.689 "nguid": "6F78C57886164694840234139254679A", 00:13:44.689 "uuid": "6f78c578-8616-4694-8402-34139254679a" 00:13:44.689 }, 00:13:44.689 { 00:13:44.689 "nsid": 2, 00:13:44.689 "bdev_name": "Malloc3", 00:13:44.689 "name": "Malloc3", 00:13:44.689 "nguid": "17A8558AA1A148B0995105393D1753D2", 00:13:44.689 "uuid": "17a8558a-a1a1-48b0-9951-05393d1753d2" 00:13:44.689 } 00:13:44.689 ] 00:13:44.689 }, 00:13:44.689 { 00:13:44.689 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:44.689 "subtype": "NVMe", 00:13:44.689 "listen_addresses": [ 00:13:44.689 { 00:13:44.689 "trtype": "VFIOUSER", 00:13:44.689 "adrfam": "IPv4", 00:13:44.689 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:44.689 "trsvcid": "0" 00:13:44.689 } 00:13:44.689 ], 00:13:44.689 "allow_any_host": true, 00:13:44.689 "hosts": [], 00:13:44.689 "serial_number": "SPDK2", 00:13:44.689 "model_number": "SPDK bdev Controller", 00:13:44.689 "max_namespaces": 32, 00:13:44.689 "min_cntlid": 1, 00:13:44.689 "max_cntlid": 65519, 00:13:44.689 "namespaces": [ 00:13:44.689 { 00:13:44.689 
"nsid": 1, 00:13:44.689 "bdev_name": "Malloc2", 00:13:44.689 "name": "Malloc2", 00:13:44.689 "nguid": "1FF5518EF7BE4601840BBEE2A3C08874", 00:13:44.689 "uuid": "1ff5518e-f7be-4601-840b-bee2a3c08874" 00:13:44.689 } 00:13:44.689 ] 00:13:44.689 } 00:13:44.689 ] 00:13:44.689 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:44.689 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1001288 00:13:44.689 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:44.689 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:44.689 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:44.689 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:44.689 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:44.689 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:44.689 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:44.689 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:44.948 [2024-11-15 12:35:25.064185] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:44.948 Malloc4 00:13:44.948 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:45.205 [2024-11-15 12:35:25.473277] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:45.205 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:45.205 Asynchronous Event Request test 00:13:45.205 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:45.205 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:45.205 Registering asynchronous event callbacks... 00:13:45.205 Starting namespace attribute notice tests for all controllers... 00:13:45.205 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:45.205 aer_cb - Changed Namespace 00:13:45.205 Cleaning up... 
00:13:45.463 [ 00:13:45.463 { 00:13:45.463 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:45.463 "subtype": "Discovery", 00:13:45.463 "listen_addresses": [], 00:13:45.463 "allow_any_host": true, 00:13:45.463 "hosts": [] 00:13:45.463 }, 00:13:45.463 { 00:13:45.463 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:45.463 "subtype": "NVMe", 00:13:45.463 "listen_addresses": [ 00:13:45.463 { 00:13:45.463 "trtype": "VFIOUSER", 00:13:45.463 "adrfam": "IPv4", 00:13:45.463 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:45.463 "trsvcid": "0" 00:13:45.463 } 00:13:45.463 ], 00:13:45.463 "allow_any_host": true, 00:13:45.463 "hosts": [], 00:13:45.463 "serial_number": "SPDK1", 00:13:45.463 "model_number": "SPDK bdev Controller", 00:13:45.463 "max_namespaces": 32, 00:13:45.463 "min_cntlid": 1, 00:13:45.463 "max_cntlid": 65519, 00:13:45.463 "namespaces": [ 00:13:45.463 { 00:13:45.463 "nsid": 1, 00:13:45.463 "bdev_name": "Malloc1", 00:13:45.463 "name": "Malloc1", 00:13:45.463 "nguid": "6F78C57886164694840234139254679A", 00:13:45.463 "uuid": "6f78c578-8616-4694-8402-34139254679a" 00:13:45.463 }, 00:13:45.463 { 00:13:45.463 "nsid": 2, 00:13:45.463 "bdev_name": "Malloc3", 00:13:45.463 "name": "Malloc3", 00:13:45.463 "nguid": "17A8558AA1A148B0995105393D1753D2", 00:13:45.463 "uuid": "17a8558a-a1a1-48b0-9951-05393d1753d2" 00:13:45.463 } 00:13:45.463 ] 00:13:45.463 }, 00:13:45.463 { 00:13:45.463 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:45.463 "subtype": "NVMe", 00:13:45.463 "listen_addresses": [ 00:13:45.464 { 00:13:45.464 "trtype": "VFIOUSER", 00:13:45.464 "adrfam": "IPv4", 00:13:45.464 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:45.464 "trsvcid": "0" 00:13:45.464 } 00:13:45.464 ], 00:13:45.464 "allow_any_host": true, 00:13:45.464 "hosts": [], 00:13:45.464 "serial_number": "SPDK2", 00:13:45.464 "model_number": "SPDK bdev Controller", 00:13:45.464 "max_namespaces": 32, 00:13:45.464 "min_cntlid": 1, 00:13:45.464 "max_cntlid": 65519, 00:13:45.464 "namespaces": [ 00:13:45.464 { 00:13:45.464 "nsid": 1, 00:13:45.464 "bdev_name": "Malloc2", 00:13:45.464 "name": "Malloc2", 00:13:45.464 "nguid": "1FF5518EF7BE4601840BBEE2A3C08874", 00:13:45.464 "uuid": "1ff5518e-f7be-4601-840b-bee2a3c08874" 00:13:45.464 }, 00:13:45.464 { 00:13:45.464 "nsid": 2, 00:13:45.464 "bdev_name": "Malloc4", 00:13:45.464 "name": "Malloc4", 00:13:45.464 "nguid": "657B78235E4D4E2B9BFCAB6E3FFAEE15", 00:13:45.464 "uuid": "657b7823-5e4d-4e2b-9bfc-ab6e3ffaee15" 00:13:45.464 } 00:13:45.464 ] 00:13:45.464 } 00:13:45.464 ] 00:13:45.464 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1001288 00:13:45.464 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:45.464 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 995683 00:13:45.464 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 995683 ']' 00:13:45.464 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 995683 00:13:45.464 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:45.464 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.464 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 995683 00:13:45.464 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.464 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.464 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 995683' 00:13:45.464 killing process with pid 995683 00:13:45.464 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 995683 00:13:45.464 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 995683 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1001431 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1001431' 00:13:46.030 Process pid: 1001431 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1001431 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1001431 ']' 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.030 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:46.030 [2024-11-15 12:35:26.176300] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:46.030 [2024-11-15 12:35:26.177277] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:13:46.030 [2024-11-15 12:35:26.177340] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.030 [2024-11-15 12:35:26.243888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.030 [2024-11-15 12:35:26.304553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.030 [2024-11-15 12:35:26.304611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.030 [2024-11-15 12:35:26.304639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.030 [2024-11-15 12:35:26.304650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.030 [2024-11-15 12:35:26.304660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.030 [2024-11-15 12:35:26.306197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.030 [2024-11-15 12:35:26.306260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.030 [2024-11-15 12:35:26.306325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.030 [2024-11-15 12:35:26.306328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.289 [2024-11-15 12:35:26.401181] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:46.289 [2024-11-15 12:35:26.401414] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:46.289 [2024-11-15 12:35:26.401697] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:46.289 [2024-11-15 12:35:26.402303] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:46.289 [2024-11-15 12:35:26.402543] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:13:46.289 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:46.289 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:46.289 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:47.223 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:47.482 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:47.482 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:47.482 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:47.482 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:47.482 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:47.741 Malloc1 00:13:47.741 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:48.308 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:48.308 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:48.874 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:48.874 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:48.874 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:48.874 Malloc2 00:13:48.874 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:49.132 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:49.697 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:49.697 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:49.697 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1001431 00:13:49.697 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 1001431 ']' 00:13:49.697 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1001431 00:13:49.697 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:49.697 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.955 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1001431 00:13:49.955 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.955 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.955 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1001431' 00:13:49.955 killing process with pid 1001431 00:13:49.955 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1001431 00:13:49.955 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1001431 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:50.213 00:13:50.213 real 0m53.498s 00:13:50.213 user 3m26.764s 00:13:50.213 sys 0m3.921s 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:50.213 ************************************ 00:13:50.213 END TEST nvmf_vfio_user 00:13:50.213 ************************************ 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:50.213 ************************************ 00:13:50.213 START TEST nvmf_vfio_user_nvme_compliance 00:13:50.213 ************************************ 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:50.213 * Looking for test storage... 
00:13:50.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:13:50.213 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:50.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.214 --rc genhtml_branch_coverage=1 00:13:50.214 --rc genhtml_function_coverage=1 00:13:50.214 --rc genhtml_legend=1 00:13:50.214 --rc geninfo_all_blocks=1 00:13:50.214 --rc geninfo_unexecuted_blocks=1 00:13:50.214 00:13:50.214 ' 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:50.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.214 --rc genhtml_branch_coverage=1 00:13:50.214 --rc genhtml_function_coverage=1 00:13:50.214 --rc genhtml_legend=1 00:13:50.214 --rc geninfo_all_blocks=1 00:13:50.214 --rc geninfo_unexecuted_blocks=1 00:13:50.214 00:13:50.214 ' 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:50.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.214 --rc genhtml_branch_coverage=1 00:13:50.214 --rc genhtml_function_coverage=1 00:13:50.214 --rc genhtml_legend=1 00:13:50.214 --rc geninfo_all_blocks=1 00:13:50.214 --rc geninfo_unexecuted_blocks=1 00:13:50.214 00:13:50.214 ' 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:50.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.214 --rc genhtml_branch_coverage=1 00:13:50.214 --rc genhtml_function_coverage=1 00:13:50.214 --rc genhtml_legend=1 00:13:50.214 --rc geninfo_all_blocks=1 00:13:50.214 --rc 
geninfo_unexecuted_blocks=1 00:13:50.214 00:13:50.214 ' 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.214 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:50.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1002042 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1002042' 00:13:50.473 Process pid: 1002042 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1002042 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1002042 ']' 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.473 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:50.473 [2024-11-15 12:35:30.625651] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:13:50.473 [2024-11-15 12:35:30.625755] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.473 [2024-11-15 12:35:30.693332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:50.473 [2024-11-15 12:35:30.747370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.473 [2024-11-15 12:35:30.747425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.473 [2024-11-15 12:35:30.747454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.473 [2024-11-15 12:35:30.747464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.473 [2024-11-15 12:35:30.747473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.473 [2024-11-15 12:35:30.748869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.473 [2024-11-15 12:35:30.748935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.473 [2024-11-15 12:35:30.748939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.731 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.731 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:13:50.731 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:51.664 malloc0 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:51.664 12:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.664 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:51.922 00:13:51.922 00:13:51.922 CUnit - A unit testing framework for C - Version 2.1-3 00:13:51.922 http://cunit.sourceforge.net/ 00:13:51.922 00:13:51.922 00:13:51.922 Suite: nvme_compliance 00:13:51.922 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-15 12:35:32.117243] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:51.922 [2024-11-15 12:35:32.118686] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:51.922 [2024-11-15 12:35:32.118737] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:51.922 [2024-11-15 12:35:32.118751] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:51.922 [2024-11-15 12:35:32.120268] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:51.922 passed 00:13:51.922 Test: admin_identify_ctrlr_verify_fused ...[2024-11-15 12:35:32.208887] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:51.922 [2024-11-15 12:35:32.211910] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:51.922 passed 00:13:52.180 Test: admin_identify_ns ...[2024-11-15 12:35:32.297302] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.180 [2024-11-15 12:35:32.356739] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:52.180 [2024-11-15 12:35:32.364736] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:52.180 [2024-11-15 12:35:32.385858] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:13:52.180 passed 00:13:52.180 Test: admin_get_features_mandatory_features ...[2024-11-15 12:35:32.469247] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.180 [2024-11-15 12:35:32.472268] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.180 passed 00:13:52.438 Test: admin_get_features_optional_features ...[2024-11-15 12:35:32.557841] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.438 [2024-11-15 12:35:32.560859] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.438 passed 00:13:52.438 Test: admin_set_features_number_of_queues ...[2024-11-15 12:35:32.644017] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.438 [2024-11-15 12:35:32.748817] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.695 passed 00:13:52.695 Test: admin_get_log_page_mandatory_logs ...[2024-11-15 12:35:32.833410] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.695 [2024-11-15 12:35:32.838447] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.695 passed 00:13:52.695 Test: admin_get_log_page_with_lpo ...[2024-11-15 12:35:32.922313] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.695 [2024-11-15 12:35:32.989734] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:52.695 [2024-11-15 12:35:33.002828] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.695 passed 00:13:52.952 Test: fabric_property_get ...[2024-11-15 12:35:33.087527] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.952 [2024-11-15 12:35:33.088833] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:52.952 [2024-11-15 12:35:33.090549] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.952 passed 00:13:52.952 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-15 12:35:33.171144] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.952 [2024-11-15 12:35:33.172423] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:52.952 [2024-11-15 12:35:33.176181] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.952 passed 00:13:52.952 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-15 12:35:33.259616] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.209 [2024-11-15 12:35:33.344728] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:53.209 [2024-11-15 12:35:33.360731] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:53.209 [2024-11-15 12:35:33.365822] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.209 passed 00:13:53.209 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-15 12:35:33.449477] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.209 [2024-11-15 12:35:33.450821] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:53.209 [2024-11-15 12:35:33.452516] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.209 passed 00:13:53.209 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-15 12:35:33.533731] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.467 [2024-11-15 12:35:33.610731] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:53.467 [2024-11-15 12:35:33.634741] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:53.467 [2024-11-15 12:35:33.639837] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.467 passed 00:13:53.467 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-15 12:35:33.722045] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.467 [2024-11-15 12:35:33.723385] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:53.467 [2024-11-15 12:35:33.723440] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:53.467 [2024-11-15 12:35:33.725071] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.467 passed 00:13:53.467 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-15 12:35:33.808261] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.725 [2024-11-15 12:35:33.899732] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:53.725 [2024-11-15 12:35:33.907732] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:53.725 [2024-11-15 12:35:33.915729] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:53.725 [2024-11-15 12:35:33.923730] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:53.725 [2024-11-15 12:35:33.952825] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.725 passed 00:13:53.725 Test: admin_create_io_sq_verify_pc ...[2024-11-15 12:35:34.038999] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.725 [2024-11-15 12:35:34.055740] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:53.985 [2024-11-15 12:35:34.073389] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.985 passed 00:13:53.985 Test: admin_create_io_qp_max_qps ...[2024-11-15 12:35:34.154940] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.920 [2024-11-15 12:35:35.243736] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:13:55.484 [2024-11-15 12:35:35.629299] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.484 passed 00:13:55.484 Test: admin_create_io_sq_shared_cq ...[2024-11-15 12:35:35.712273] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.741 [2024-11-15 12:35:35.841731] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:55.741 [2024-11-15 12:35:35.878812] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.741 passed 00:13:55.741 00:13:55.741 Run Summary: Type Total Ran Passed Failed Inactive 00:13:55.741 suites 1 1 n/a 0 0 00:13:55.741 tests 18 18 18 0 0 00:13:55.741 asserts 
360 360 360 0 n/a 00:13:55.741 00:13:55.741 Elapsed time = 1.559 seconds 00:13:55.741 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1002042 00:13:55.741 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1002042 ']' 00:13:55.741 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1002042 00:13:55.741 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:13:55.741 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.741 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1002042 00:13:55.741 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.741 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.741 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1002042' 00:13:55.741 killing process with pid 1002042 00:13:55.741 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1002042 00:13:55.741 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1002042 00:13:55.997 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:55.997 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:55.997 00:13:55.997 real 0m5.819s 00:13:55.997 user 0m16.286s 00:13:55.997 sys 0m0.570s 00:13:55.997 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.997 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:55.997 ************************************ 00:13:55.997 END TEST nvmf_vfio_user_nvme_compliance 00:13:55.997 ************************************ 00:13:55.997 12:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:55.997 12:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:55.997 12:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.997 12:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:55.997 ************************************ 00:13:55.997 START TEST nvmf_vfio_user_fuzz 00:13:55.997 ************************************ 00:13:55.997 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:55.997 * Looking for test storage... 
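All 18 compliance tests above ran against a vfio-user controller that the trace set up with a short sequence of RPCs (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener); rpc_cmd is the test harness's RPC helper, a thin wrapper over SPDK's scripts/rpc.py. A rough by-hand equivalent of that setup, assuming a running nvmf_tgt and rpc.py on PATH, is:

  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc.py bdev_malloc_create 64 512 -b malloc0      # 64 MB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_compliance binary is then pointed at the same endpoint via 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0', exactly as in the trace.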
00:13:55.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.997 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:55.997 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:13:55.997 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:56.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.255 --rc genhtml_branch_coverage=1 00:13:56.255 --rc genhtml_function_coverage=1 00:13:56.255 --rc genhtml_legend=1 00:13:56.255 --rc geninfo_all_blocks=1 00:13:56.255 --rc geninfo_unexecuted_blocks=1 00:13:56.255 00:13:56.255 ' 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:56.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.255 --rc genhtml_branch_coverage=1 00:13:56.255 --rc genhtml_function_coverage=1 00:13:56.255 --rc genhtml_legend=1 00:13:56.255 --rc geninfo_all_blocks=1 00:13:56.255 --rc geninfo_unexecuted_blocks=1 00:13:56.255 00:13:56.255 ' 00:13:56.255 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:56.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.255 --rc genhtml_branch_coverage=1 00:13:56.255 --rc genhtml_function_coverage=1 00:13:56.255 --rc genhtml_legend=1 00:13:56.255 --rc geninfo_all_blocks=1 00:13:56.256 --rc geninfo_unexecuted_blocks=1 00:13:56.256 00:13:56.256 ' 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:56.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.256 --rc genhtml_branch_coverage=1 00:13:56.256 --rc genhtml_function_coverage=1 00:13:56.256 --rc genhtml_legend=1 00:13:56.256 --rc geninfo_all_blocks=1 00:13:56.256 --rc geninfo_unexecuted_blocks=1 00:13:56.256 00:13:56.256 ' 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:56.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1002774 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1002774' 00:13:56.256 Process pid: 1002774 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1002774 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1002774 ']' 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.256 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:56.513 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:56.513 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:13:56.513 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:57.446 malloc0 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.446 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:57.713 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.713 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:13:57.713 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:29.776 Fuzzing completed. Shutting down the fuzz application 00:14:29.776 00:14:29.776 Dumping successful admin opcodes: 00:14:29.776 8, 9, 10, 24, 00:14:29.776 Dumping successful io opcodes: 00:14:29.776 0, 00:14:29.776 NS: 0x20000081ef00 I/O qp, Total commands completed: 677269, total successful commands: 2639, random_seed: 4008985984 00:14:29.776 NS: 0x20000081ef00 admin qp, Total commands completed: 120556, total successful commands: 988, random_seed: 2421115904 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1002774 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1002774 ']' 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1002774 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1002774 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1002774' 00:14:29.776 killing process with pid 1002774 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1002774 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1002774 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:29.776 00:14:29.776 real 0m32.215s 00:14:29.776 user 0m33.786s 00:14:29.776 sys 0m25.762s 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:29.776 
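For reference, the fuzz pass above ran for roughly 30 seconds (12:35:37 to 12:36:08) and completed 677,269 I/O and 120,556 admin commands, of which 2,639 and 988 succeeded. Read as decimal NVMe opcodes, the successful admin opcodes 8, 9, 10 and 24 correspond to Abort, Set Features, Get Features and Keep Alive, and the lone successful I/O opcode 0 to Flush; the remainder failing is in line with what random command fuzzing against a compliant controller is expected to produce. The invocation, copied verbatim from the trace (path shortened to the spdk tree; -t 30 matches the ~30 s run time and -S looks like a seed parameter, while -N and -a are carried over as-is), was:

  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a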
************************************ 00:14:29.776 END TEST nvmf_vfio_user_fuzz 00:14:29.776 ************************************ 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:29.776 ************************************ 00:14:29.776 START TEST nvmf_auth_target 00:14:29.776 ************************************ 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:29.776 * Looking for test storage... 00:14:29.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:29.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.776 --rc genhtml_branch_coverage=1 00:14:29.776 --rc genhtml_function_coverage=1 00:14:29.776 --rc genhtml_legend=1 00:14:29.776 --rc geninfo_all_blocks=1 00:14:29.776 --rc geninfo_unexecuted_blocks=1 00:14:29.776 00:14:29.776 ' 00:14:29.776 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:29.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.777 --rc genhtml_branch_coverage=1 00:14:29.777 --rc genhtml_function_coverage=1 00:14:29.777 --rc genhtml_legend=1 00:14:29.777 --rc geninfo_all_blocks=1 00:14:29.777 --rc geninfo_unexecuted_blocks=1 00:14:29.777 00:14:29.777 ' 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:29.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.777 --rc genhtml_branch_coverage=1 00:14:29.777 --rc genhtml_function_coverage=1 00:14:29.777 --rc genhtml_legend=1 00:14:29.777 --rc geninfo_all_blocks=1 00:14:29.777 --rc geninfo_unexecuted_blocks=1 00:14:29.777 00:14:29.777 ' 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:29.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.777 --rc genhtml_branch_coverage=1 00:14:29.777 --rc genhtml_function_coverage=1 00:14:29.777 --rc genhtml_legend=1 00:14:29.777 --rc geninfo_all_blocks=1 00:14:29.777 --rc geninfo_unexecuted_blocks=1 00:14:29.777 00:14:29.777 ' 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.777 12:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:29.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:29.777 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:30.713 
12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:30.713 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:30.713 12:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:30.713 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:30.713 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:30.713 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:30.713 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:30.714 12:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:30.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:14:30.714 00:14:30.714 --- 10.0.0.2 ping statistics --- 00:14:30.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.714 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:30.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:14:30.714 00:14:30.714 --- 10.0.0.1 ping statistics --- 00:14:30.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.714 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:30.714 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:30.714 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:30.714 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:30.714 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:30.714 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.714 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1008847 00:14:30.714 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:30.714 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1008847 00:14:30.714 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1008847 ']' 00:14:30.714 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.714 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.714 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
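For context on the environment traced above: nvmf_tcp_init has moved one port of the E810 pair, cvl_0_0, into the cvl_0_0_ns_spdk network namespace at 10.0.0.2, left its peer cvl_0_1 at 10.0.0.1 on the host side, and the two pings confirm reachability in both directions before nvmf_tgt is started inside that namespace. The subsystem configuration itself is not visible in this excerpt, so the following is only a sketch (the rpc.py flag spellings and the null bdev are assumptions) of how a TCP listener for the nqn.2024-03.io.spdk:cnode0 subsystem referenced later in this log would typically be created:

rpc_cmd nvmf_create_transport -t tcp
rpc_cmd nvmf_create_subsystem nqn.2024-03.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_listener nqn.2024-03.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create null0 64 512            # placeholder namespace, 64 MiB / 512 B blocks
rpc_cmd nvmf_subsystem_add_ns nqn.2024-03.io.spdk:cnode0 null0

Hosts are deliberately not allowed wholesale here; the auth test adds them per host NQN together with DH-HMAC-CHAP keys, as shown further down.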
00:14:30.714 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.714 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.281 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.281 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:31.281 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:31.281 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:31.281 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1008867 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ac278c3f4688300a50a81c47baed2bc1fe67babba40c494a 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.L4M 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ac278c3f4688300a50a81c47baed2bc1fe67babba40c494a 0 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ac278c3f4688300a50a81c47baed2bc1fe67babba40c494a 0 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ac278c3f4688300a50a81c47baed2bc1fe67babba40c494a 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
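Two SPDK applications are now running: the NVMe-oF target (nvmf_tgt, pid 1008847, controlled over the default /var/tmp/spdk.sock and launched through the namespace wrapper) and a second spdk_tgt (pid 1008867) that plays the host/initiator role and listens on /var/tmp/host.sock. Every hostrpc call in the rest of this log is simply rpc.py pointed at that second socket; a sketch of the wrapper as it appears to be defined in auth.sh (the $rootdir variable standing for the SPDK checkout is an assumption here):

hostrpc() {
    "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
}
# example from the entries below:
# hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null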
00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.L4M 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.L4M 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.L4M 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9f1887601da44195b23c0f06eaf986e31dfeb6c47dd7ba2dfa5f03857e94a18e 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.LJT 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9f1887601da44195b23c0f06eaf986e31dfeb6c47dd7ba2dfa5f03857e94a18e 3 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9f1887601da44195b23c0f06eaf986e31dfeb6c47dd7ba2dfa5f03857e94a18e 3 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9f1887601da44195b23c0f06eaf986e31dfeb6c47dd7ba2dfa5f03857e94a18e 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.LJT 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.LJT 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.LJT 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
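The gen_dhchap_key calls above are what produce the DHHC-1 strings that later appear verbatim in the nvme connect step (for example DHHC-1:00:YWMy...: for keys[0]). A minimal sketch of that formatting, assuming the same layout nvme-cli's gen-dhchap-key uses, namely base64 over the ASCII secret with its CRC-32 appended (the little-endian byte order and the "00 = unhashed secret" transform id are assumptions here):

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters, as in "gen_dhchap_key null 48"
python3 - "$key" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()                      # the hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")     # assumed little-endian, per the DHHC-1 layout
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
PY

Decoding the base64 of keys[0] seen later in this log back to "ac278c3f..." plus four trailing bytes is consistent with this layout.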
00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=aed66f65718d9ef9d79d2a4b9807aeed 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.v5q 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key aed66f65718d9ef9d79d2a4b9807aeed 1 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 aed66f65718d9ef9d79d2a4b9807aeed 1 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=aed66f65718d9ef9d79d2a4b9807aeed 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.v5q 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.v5q 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.v5q 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1207e57978f23a371729f0a71e188bd69b8cb7ba238c4d95 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.DuK 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1207e57978f23a371729f0a71e188bd69b8cb7ba238c4d95 2 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1207e57978f23a371729f0a71e188bd69b8cb7ba238c4d95 2 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:31.282 12:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1207e57978f23a371729f0a71e188bd69b8cb7ba238c4d95 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.DuK 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.DuK 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.DuK 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a2097997ac09c2d94d306616ada62e226975bd5d873ae17f 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.qlK 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a2097997ac09c2d94d306616ada62e226975bd5d873ae17f 2 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a2097997ac09c2d94d306616ada62e226975bd5d873ae17f 2 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a2097997ac09c2d94d306616ada62e226975bd5d873ae17f 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:31.282 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.qlK 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.qlK 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.qlK 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
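Each keys[i]/ckeys[i] pair generated in this block is later used for bidirectional DH-HMAC-CHAP: the "ckey" is the controller key with which the target authenticates itself back to the host. The pairing is visible further down as matching --dhchap-key / --dhchap-ctrlr-key arguments on both sides; condensed from those entries:

# target side: allow the host NQN and bind its key pair
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side: attach a controller presenting the same pair
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0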
00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fae68689612b54d10c4047b2908a9c69 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.I1R 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fae68689612b54d10c4047b2908a9c69 1 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fae68689612b54d10c4047b2908a9c69 1 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fae68689612b54d10c4047b2908a9c69 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:31.283 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.I1R 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.I1R 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.I1R 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=074a3e618539be0c28e77814e35cd45e3d76cf0e337cc220125a5d9d40932bb5 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.CFI 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 074a3e618539be0c28e77814e35cd45e3d76cf0e337cc220125a5d9d40932bb5 3 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 074a3e618539be0c28e77814e35cd45e3d76cf0e337cc220125a5d9d40932bb5 3 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=074a3e618539be0c28e77814e35cd45e3d76cf0e337cc220125a5d9d40932bb5 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.CFI 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.CFI 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.CFI 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1008847 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1008847 ']' 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.541 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.800 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.800 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:31.800 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1008867 /var/tmp/host.sock 00:14:31.800 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1008867 ']' 00:14:31.800 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:31.800 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.800 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:31.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
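With both RPC sockets up, the key files have to be registered in each application's keyring before they can be referenced by name. That is what the keyring_file_add_key entries that follow do, once against the target (rpc_cmd) and once against the host-side spdk_tgt (hostrpc); the pattern, condensed from the auth.sh loop as traced below:

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    hostrpc keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then                 # keys[3] has no controller key in this run
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
        hostrpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done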
00:14:31.800 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.800 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.058 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.058 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:32.058 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:32.058 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.058 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.058 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.058 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:32.058 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.L4M 00:14:32.058 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.058 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.058 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.058 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.L4M 00:14:32.058 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.L4M 00:14:32.316 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.LJT ]] 00:14:32.316 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LJT 00:14:32.316 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.316 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.316 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.316 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LJT 00:14:32.316 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LJT 00:14:32.574 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:32.574 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.v5q 00:14:32.574 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.574 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.574 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.574 12:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.v5q 00:14:32.574 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.v5q 00:14:32.832 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.DuK ]] 00:14:32.832 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DuK 00:14:32.832 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.832 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.832 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.832 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DuK 00:14:32.832 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DuK 00:14:33.397 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:33.397 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qlK 00:14:33.397 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.397 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.397 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.397 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.qlK 00:14:33.397 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.qlK 00:14:33.656 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.I1R ]] 00:14:33.656 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I1R 00:14:33.656 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.656 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.656 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.656 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I1R 00:14:33.656 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I1R 00:14:33.914 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:33.914 12:36:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.CFI 00:14:33.914 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.914 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.914 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.914 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.CFI 00:14:33.914 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.CFI 00:14:34.173 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:34.173 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:34.173 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:34.173 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.173 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:34.173 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:34.430 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:34.430 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.430 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:34.430 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:34.430 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:34.430 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.430 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.430 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.430 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.430 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.430 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.430 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.430 
12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.688 00:14:34.688 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.688 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.688 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.945 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.945 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.945 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.945 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.945 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.945 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.945 { 00:14:34.945 "cntlid": 1, 00:14:34.945 "qid": 0, 00:14:34.945 "state": "enabled", 00:14:34.945 "thread": "nvmf_tgt_poll_group_000", 00:14:34.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:34.945 "listen_address": { 00:14:34.945 "trtype": "TCP", 00:14:34.945 "adrfam": "IPv4", 00:14:34.945 "traddr": "10.0.0.2", 00:14:34.945 "trsvcid": "4420" 00:14:34.945 }, 00:14:34.945 "peer_address": { 00:14:34.945 "trtype": "TCP", 00:14:34.945 "adrfam": "IPv4", 00:14:34.945 "traddr": "10.0.0.1", 00:14:34.945 "trsvcid": "60770" 00:14:34.945 }, 00:14:34.945 "auth": { 00:14:34.945 "state": "completed", 00:14:34.945 "digest": "sha256", 00:14:34.945 "dhgroup": "null" 00:14:34.945 } 00:14:34.945 } 00:14:34.945 ]' 00:14:34.945 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.945 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.945 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.945 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:34.945 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.203 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.203 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.203 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.462 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:14:35.462 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:14:36.395 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.395 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:36.395 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.395 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.395 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.395 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.395 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.395 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.653 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:36.653 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.653 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:36.653 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:36.653 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:36.653 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.653 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.653 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.653 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.653 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.653 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.653 12:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.653 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.911 00:14:36.911 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.911 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.911 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.169 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.169 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.169 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.169 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.169 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.169 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.169 { 00:14:37.169 "cntlid": 3, 00:14:37.169 "qid": 0, 00:14:37.169 "state": "enabled", 00:14:37.169 "thread": "nvmf_tgt_poll_group_000", 00:14:37.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:37.169 "listen_address": { 00:14:37.169 "trtype": "TCP", 00:14:37.169 "adrfam": "IPv4", 00:14:37.169 "traddr": "10.0.0.2", 00:14:37.169 "trsvcid": "4420" 00:14:37.169 }, 00:14:37.169 "peer_address": { 00:14:37.169 "trtype": "TCP", 00:14:37.169 "adrfam": "IPv4", 00:14:37.169 "traddr": "10.0.0.1", 00:14:37.169 "trsvcid": "60808" 00:14:37.169 }, 00:14:37.169 "auth": { 00:14:37.169 "state": "completed", 00:14:37.169 "digest": "sha256", 00:14:37.169 "dhgroup": "null" 00:14:37.169 } 00:14:37.169 } 00:14:37.169 ]' 00:14:37.169 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.169 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.169 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.169 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:37.169 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.427 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.427 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.428 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.685 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:14:37.685 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.619 12:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.619 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.184 00:14:39.184 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.184 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.184 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.442 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.442 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.442 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.442 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.442 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.442 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.442 { 00:14:39.442 "cntlid": 5, 00:14:39.442 "qid": 0, 00:14:39.442 "state": "enabled", 00:14:39.442 "thread": "nvmf_tgt_poll_group_000", 00:14:39.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:39.442 "listen_address": { 00:14:39.442 "trtype": "TCP", 00:14:39.442 "adrfam": "IPv4", 00:14:39.442 "traddr": "10.0.0.2", 00:14:39.442 "trsvcid": "4420" 00:14:39.442 }, 00:14:39.442 "peer_address": { 00:14:39.442 "trtype": "TCP", 00:14:39.442 "adrfam": "IPv4", 00:14:39.442 "traddr": "10.0.0.1", 00:14:39.442 "trsvcid": "60816" 00:14:39.442 }, 00:14:39.442 "auth": { 00:14:39.442 "state": "completed", 00:14:39.442 "digest": "sha256", 00:14:39.442 "dhgroup": "null" 00:14:39.442 } 00:14:39.442 } 00:14:39.442 ]' 00:14:39.442 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.442 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.442 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.442 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:39.442 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.442 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.442 12:36:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.442 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.700 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:14:39.700 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:14:40.634 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.634 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.634 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.634 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.634 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.634 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.634 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:40.634 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:40.892 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:40.892 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.892 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:40.892 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:40.892 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:40.892 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.892 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:40.892 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.892 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:40.892 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.892 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:40.892 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:40.892 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:41.457 00:14:41.457 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.457 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.457 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.715 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.715 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.715 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.715 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.715 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.715 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.715 { 00:14:41.715 "cntlid": 7, 00:14:41.715 "qid": 0, 00:14:41.715 "state": "enabled", 00:14:41.715 "thread": "nvmf_tgt_poll_group_000", 00:14:41.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:41.715 "listen_address": { 00:14:41.715 "trtype": "TCP", 00:14:41.715 "adrfam": "IPv4", 00:14:41.715 "traddr": "10.0.0.2", 00:14:41.715 "trsvcid": "4420" 00:14:41.715 }, 00:14:41.715 "peer_address": { 00:14:41.715 "trtype": "TCP", 00:14:41.715 "adrfam": "IPv4", 00:14:41.715 "traddr": "10.0.0.1", 00:14:41.715 "trsvcid": "51228" 00:14:41.715 }, 00:14:41.715 "auth": { 00:14:41.715 "state": "completed", 00:14:41.715 "digest": "sha256", 00:14:41.715 "dhgroup": "null" 00:14:41.715 } 00:14:41.715 } 00:14:41.715 ]' 00:14:41.715 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.715 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.715 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.715 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:41.715 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.715 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.715 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.715 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.973 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:14:41.973 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:14:42.906 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.906 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.906 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.906 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.906 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.906 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:42.906 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.906 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:42.906 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:43.471 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:43.471 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.471 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:43.471 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:43.471 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:43.471 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.471 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.471 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.471 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.471 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.471 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.471 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.471 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.729 00:14:43.729 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.729 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.729 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.987 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.987 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.987 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.987 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.987 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.987 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.987 { 00:14:43.987 "cntlid": 9, 00:14:43.987 "qid": 0, 00:14:43.987 "state": "enabled", 00:14:43.987 "thread": "nvmf_tgt_poll_group_000", 00:14:43.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:43.987 "listen_address": { 00:14:43.987 "trtype": "TCP", 00:14:43.987 "adrfam": "IPv4", 00:14:43.987 "traddr": "10.0.0.2", 00:14:43.987 "trsvcid": "4420" 00:14:43.987 }, 00:14:43.987 "peer_address": { 00:14:43.987 "trtype": "TCP", 00:14:43.987 "adrfam": "IPv4", 00:14:43.987 "traddr": "10.0.0.1", 00:14:43.987 "trsvcid": "51244" 00:14:43.987 }, 00:14:43.987 "auth": { 00:14:43.987 "state": "completed", 00:14:43.987 "digest": "sha256", 00:14:43.987 "dhgroup": "ffdhe2048" 00:14:43.987 } 00:14:43.987 } 00:14:43.987 ]' 00:14:43.987 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.987 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.987 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.987 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:14:43.987 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.987 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.987 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.987 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.245 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:14:44.245 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:14:45.178 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.178 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.178 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.178 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.178 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.178 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.178 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:45.178 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:45.744 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:45.744 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.744 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:45.744 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:45.744 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:45.744 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.744 12:36:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.744 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.744 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.744 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.744 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.744 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.744 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.002 00:14:46.002 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.002 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.002 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.259 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.259 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.259 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.259 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.259 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.259 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.259 { 00:14:46.259 "cntlid": 11, 00:14:46.259 "qid": 0, 00:14:46.259 "state": "enabled", 00:14:46.259 "thread": "nvmf_tgt_poll_group_000", 00:14:46.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:46.259 "listen_address": { 00:14:46.259 "trtype": "TCP", 00:14:46.259 "adrfam": "IPv4", 00:14:46.259 "traddr": "10.0.0.2", 00:14:46.259 "trsvcid": "4420" 00:14:46.259 }, 00:14:46.259 "peer_address": { 00:14:46.259 "trtype": "TCP", 00:14:46.259 "adrfam": "IPv4", 00:14:46.259 "traddr": "10.0.0.1", 00:14:46.259 "trsvcid": "51276" 00:14:46.259 }, 00:14:46.259 "auth": { 00:14:46.259 "state": "completed", 00:14:46.259 "digest": "sha256", 00:14:46.259 "dhgroup": "ffdhe2048" 00:14:46.259 } 00:14:46.259 } 00:14:46.259 ]' 00:14:46.259 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.259 12:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.259 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.554 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:46.554 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.554 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.554 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.554 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.862 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:14:46.862 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:14:47.818 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.818 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.818 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.818 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.818 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.818 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.818 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:47.818 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:48.076 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:48.076 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.076 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:48.076 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:48.076 12:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:48.076 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.076 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.076 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.076 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.076 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.076 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.076 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.076 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.334 00:14:48.334 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.334 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.334 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.592 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.592 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.592 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.592 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.592 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.592 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.592 { 00:14:48.592 "cntlid": 13, 00:14:48.592 "qid": 0, 00:14:48.592 "state": "enabled", 00:14:48.592 "thread": "nvmf_tgt_poll_group_000", 00:14:48.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:48.592 "listen_address": { 00:14:48.592 "trtype": "TCP", 00:14:48.592 "adrfam": "IPv4", 00:14:48.592 "traddr": "10.0.0.2", 00:14:48.592 "trsvcid": "4420" 00:14:48.592 }, 00:14:48.592 "peer_address": { 00:14:48.592 "trtype": "TCP", 00:14:48.592 "adrfam": "IPv4", 00:14:48.592 "traddr": "10.0.0.1", 00:14:48.592 "trsvcid": "51308" 00:14:48.592 }, 00:14:48.592 "auth": { 00:14:48.592 "state": "completed", 00:14:48.592 "digest": 
"sha256", 00:14:48.592 "dhgroup": "ffdhe2048" 00:14:48.592 } 00:14:48.592 } 00:14:48.592 ]' 00:14:48.592 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.592 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.592 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.592 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:48.592 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.592 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.592 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.592 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:14:49.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.093 12:36:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.093 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.351 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.351 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:50.351 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.351 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.609 00:14:50.609 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.609 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.609 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.867 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.867 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.867 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.867 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.867 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.867 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.867 { 00:14:50.867 "cntlid": 15, 00:14:50.867 "qid": 0, 00:14:50.867 "state": "enabled", 00:14:50.867 "thread": "nvmf_tgt_poll_group_000", 00:14:50.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:50.867 "listen_address": { 00:14:50.867 "trtype": "TCP", 00:14:50.867 "adrfam": "IPv4", 00:14:50.867 "traddr": "10.0.0.2", 00:14:50.867 "trsvcid": "4420" 00:14:50.867 }, 00:14:50.867 "peer_address": { 00:14:50.867 "trtype": "TCP", 00:14:50.867 "adrfam": "IPv4", 00:14:50.867 "traddr": "10.0.0.1", 00:14:50.867 
"trsvcid": "46684" 00:14:50.867 }, 00:14:50.867 "auth": { 00:14:50.867 "state": "completed", 00:14:50.867 "digest": "sha256", 00:14:50.867 "dhgroup": "ffdhe2048" 00:14:50.867 } 00:14:50.867 } 00:14:50.867 ]' 00:14:50.867 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.867 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.867 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.867 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:50.867 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.125 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.125 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.125 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.383 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:14:51.383 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:14:52.317 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.317 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.317 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.317 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.317 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.317 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:52.317 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.317 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:52.317 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:52.575 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:52.575 12:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.575 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:52.575 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:52.575 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:52.575 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.575 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.575 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.575 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.575 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.575 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.575 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.575 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.833 00:14:52.833 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.833 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.833 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.399 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.399 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.399 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.399 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.399 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.399 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.399 { 00:14:53.399 "cntlid": 17, 00:14:53.399 "qid": 0, 00:14:53.399 "state": "enabled", 00:14:53.399 "thread": "nvmf_tgt_poll_group_000", 00:14:53.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:53.399 "listen_address": { 00:14:53.399 "trtype": "TCP", 00:14:53.399 "adrfam": "IPv4", 
00:14:53.399 "traddr": "10.0.0.2", 00:14:53.399 "trsvcid": "4420" 00:14:53.399 }, 00:14:53.399 "peer_address": { 00:14:53.399 "trtype": "TCP", 00:14:53.400 "adrfam": "IPv4", 00:14:53.400 "traddr": "10.0.0.1", 00:14:53.400 "trsvcid": "46714" 00:14:53.400 }, 00:14:53.400 "auth": { 00:14:53.400 "state": "completed", 00:14:53.400 "digest": "sha256", 00:14:53.400 "dhgroup": "ffdhe3072" 00:14:53.400 } 00:14:53.400 } 00:14:53.400 ]' 00:14:53.400 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.400 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.400 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.400 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:53.400 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.400 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.400 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.400 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.658 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:14:53.658 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:14:54.600 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.600 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.600 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.600 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.600 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.600 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.600 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:54.600 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:54.858 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:54.858 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.858 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:54.858 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:54.858 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:54.858 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.858 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.858 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.858 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.858 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.858 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.858 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.858 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.116 00:14:55.116 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.116 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.116 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.374 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.374 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.374 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.374 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.632 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.632 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.632 { 
00:14:55.632 "cntlid": 19, 00:14:55.632 "qid": 0, 00:14:55.632 "state": "enabled", 00:14:55.632 "thread": "nvmf_tgt_poll_group_000", 00:14:55.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:55.632 "listen_address": { 00:14:55.632 "trtype": "TCP", 00:14:55.632 "adrfam": "IPv4", 00:14:55.632 "traddr": "10.0.0.2", 00:14:55.632 "trsvcid": "4420" 00:14:55.632 }, 00:14:55.632 "peer_address": { 00:14:55.632 "trtype": "TCP", 00:14:55.632 "adrfam": "IPv4", 00:14:55.632 "traddr": "10.0.0.1", 00:14:55.632 "trsvcid": "46734" 00:14:55.632 }, 00:14:55.632 "auth": { 00:14:55.632 "state": "completed", 00:14:55.632 "digest": "sha256", 00:14:55.632 "dhgroup": "ffdhe3072" 00:14:55.632 } 00:14:55.632 } 00:14:55.632 ]' 00:14:55.632 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.632 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.632 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.632 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:55.632 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.632 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.632 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.632 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.890 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:14:55.890 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:14:56.824 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.824 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.824 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.824 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.824 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.824 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.824 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:56.824 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:57.082 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:57.082 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.082 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:57.082 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:57.082 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:57.082 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.082 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.082 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.082 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.082 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.082 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.082 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.082 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.340 00:14:57.340 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.340 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.340 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.598 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.598 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.598 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.598 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.598 12:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.598 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.598 { 00:14:57.598 "cntlid": 21, 00:14:57.598 "qid": 0, 00:14:57.598 "state": "enabled", 00:14:57.598 "thread": "nvmf_tgt_poll_group_000", 00:14:57.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:57.598 "listen_address": { 00:14:57.598 "trtype": "TCP", 00:14:57.598 "adrfam": "IPv4", 00:14:57.598 "traddr": "10.0.0.2", 00:14:57.598 "trsvcid": "4420" 00:14:57.598 }, 00:14:57.598 "peer_address": { 00:14:57.598 "trtype": "TCP", 00:14:57.598 "adrfam": "IPv4", 00:14:57.598 "traddr": "10.0.0.1", 00:14:57.598 "trsvcid": "46758" 00:14:57.598 }, 00:14:57.598 "auth": { 00:14:57.598 "state": "completed", 00:14:57.598 "digest": "sha256", 00:14:57.598 "dhgroup": "ffdhe3072" 00:14:57.598 } 00:14:57.598 } 00:14:57.598 ]' 00:14:57.598 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.856 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.856 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.856 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:57.856 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.856 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.856 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.856 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.113 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:14:58.113 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:14:59.047 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.047 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.047 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.047 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.047 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:59.047 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.047 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:59.047 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:59.305 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:59.305 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.305 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:59.305 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:59.305 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:59.305 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.305 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:59.305 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.305 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.305 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.305 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:59.305 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.305 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.873 00:14:59.873 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.873 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.873 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.131 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.131 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.131 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.131 12:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.131 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.131 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.131 { 00:15:00.131 "cntlid": 23, 00:15:00.131 "qid": 0, 00:15:00.131 "state": "enabled", 00:15:00.131 "thread": "nvmf_tgt_poll_group_000", 00:15:00.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:00.131 "listen_address": { 00:15:00.131 "trtype": "TCP", 00:15:00.131 "adrfam": "IPv4", 00:15:00.131 "traddr": "10.0.0.2", 00:15:00.131 "trsvcid": "4420" 00:15:00.131 }, 00:15:00.131 "peer_address": { 00:15:00.131 "trtype": "TCP", 00:15:00.131 "adrfam": "IPv4", 00:15:00.131 "traddr": "10.0.0.1", 00:15:00.131 "trsvcid": "46790" 00:15:00.131 }, 00:15:00.131 "auth": { 00:15:00.131 "state": "completed", 00:15:00.131 "digest": "sha256", 00:15:00.131 "dhgroup": "ffdhe3072" 00:15:00.131 } 00:15:00.131 } 00:15:00.131 ]' 00:15:00.131 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.131 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.131 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.131 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:00.131 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.131 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.131 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.131 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.389 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:15:00.389 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:15:01.323 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.323 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:01.323 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.323 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.323 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:01.323 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.323 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.323 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:01.323 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:01.580 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:01.581 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.581 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.581 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:01.581 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:01.581 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.581 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.581 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.581 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.581 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.581 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.581 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.581 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.147 00:15:02.147 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.147 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.147 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.405 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.405 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.405 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.405 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.405 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.405 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.405 { 00:15:02.405 "cntlid": 25, 00:15:02.405 "qid": 0, 00:15:02.405 "state": "enabled", 00:15:02.405 "thread": "nvmf_tgt_poll_group_000", 00:15:02.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:02.405 "listen_address": { 00:15:02.405 "trtype": "TCP", 00:15:02.405 "adrfam": "IPv4", 00:15:02.405 "traddr": "10.0.0.2", 00:15:02.405 "trsvcid": "4420" 00:15:02.405 }, 00:15:02.405 "peer_address": { 00:15:02.405 "trtype": "TCP", 00:15:02.405 "adrfam": "IPv4", 00:15:02.405 "traddr": "10.0.0.1", 00:15:02.405 "trsvcid": "47774" 00:15:02.405 }, 00:15:02.405 "auth": { 00:15:02.405 "state": "completed", 00:15:02.405 "digest": "sha256", 00:15:02.405 "dhgroup": "ffdhe4096" 00:15:02.405 } 00:15:02.405 } 00:15:02.405 ]' 00:15:02.405 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.405 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.405 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.405 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:02.405 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.405 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.405 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.405 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.663 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:15:02.663 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:15:03.597 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.597 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.597 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.597 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.597 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.597 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.597 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:03.597 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:03.855 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:03.855 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.855 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.855 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:03.855 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:03.855 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.855 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.855 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.855 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.855 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.855 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.855 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.855 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.421 00:15:04.421 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.421 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.421 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.679 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.679 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.679 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.679 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.679 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.679 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.679 { 00:15:04.679 "cntlid": 27, 00:15:04.679 "qid": 0, 00:15:04.679 "state": "enabled", 00:15:04.679 "thread": "nvmf_tgt_poll_group_000", 00:15:04.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:04.679 "listen_address": { 00:15:04.679 "trtype": "TCP", 00:15:04.679 "adrfam": "IPv4", 00:15:04.679 "traddr": "10.0.0.2", 00:15:04.679 "trsvcid": "4420" 00:15:04.679 }, 00:15:04.679 "peer_address": { 00:15:04.679 "trtype": "TCP", 00:15:04.679 "adrfam": "IPv4", 00:15:04.679 "traddr": "10.0.0.1", 00:15:04.679 "trsvcid": "47812" 00:15:04.679 }, 00:15:04.679 "auth": { 00:15:04.679 "state": "completed", 00:15:04.679 "digest": "sha256", 00:15:04.679 "dhgroup": "ffdhe4096" 00:15:04.679 } 00:15:04.679 } 00:15:04.679 ]' 00:15:04.679 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.679 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.679 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.679 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:04.679 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.679 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.679 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.679 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.936 12:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:15:04.936 12:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:15:05.870 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:05.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.870 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:05.870 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.870 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.870 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.870 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.870 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:05.870 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:06.128 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:06.128 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.128 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:06.128 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:06.128 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:06.128 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.128 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.128 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.128 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.128 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.128 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.128 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.128 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.693 00:15:06.693 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:15:06.693 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.693 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.952 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.952 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.952 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.952 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.952 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.952 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.952 { 00:15:06.952 "cntlid": 29, 00:15:06.952 "qid": 0, 00:15:06.952 "state": "enabled", 00:15:06.952 "thread": "nvmf_tgt_poll_group_000", 00:15:06.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:06.952 "listen_address": { 00:15:06.952 "trtype": "TCP", 00:15:06.952 "adrfam": "IPv4", 00:15:06.952 "traddr": "10.0.0.2", 00:15:06.952 "trsvcid": "4420" 00:15:06.952 }, 00:15:06.952 "peer_address": { 00:15:06.952 "trtype": "TCP", 00:15:06.952 "adrfam": "IPv4", 00:15:06.952 "traddr": "10.0.0.1", 00:15:06.952 "trsvcid": "47850" 00:15:06.952 }, 00:15:06.952 "auth": { 00:15:06.952 "state": "completed", 00:15:06.952 "digest": "sha256", 00:15:06.952 "dhgroup": "ffdhe4096" 00:15:06.952 } 00:15:06.952 } 00:15:06.952 ]' 00:15:06.952 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.952 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.952 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.952 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:06.952 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.952 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.952 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.952 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.518 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:15:07.518 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: 
--dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.452 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.018 00:15:09.018 12:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.018 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.018 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.276 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.276 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.276 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.276 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.276 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.276 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.276 { 00:15:09.276 "cntlid": 31, 00:15:09.276 "qid": 0, 00:15:09.276 "state": "enabled", 00:15:09.276 "thread": "nvmf_tgt_poll_group_000", 00:15:09.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:09.276 "listen_address": { 00:15:09.276 "trtype": "TCP", 00:15:09.276 "adrfam": "IPv4", 00:15:09.276 "traddr": "10.0.0.2", 00:15:09.276 "trsvcid": "4420" 00:15:09.276 }, 00:15:09.276 "peer_address": { 00:15:09.276 "trtype": "TCP", 00:15:09.276 "adrfam": "IPv4", 00:15:09.276 "traddr": "10.0.0.1", 00:15:09.276 "trsvcid": "47882" 00:15:09.276 }, 00:15:09.276 "auth": { 00:15:09.276 "state": "completed", 00:15:09.276 "digest": "sha256", 00:15:09.276 "dhgroup": "ffdhe4096" 00:15:09.276 } 00:15:09.276 } 00:15:09.276 ]' 00:15:09.276 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.276 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.276 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.276 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:09.276 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.276 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.276 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.276 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.534 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:15:09.534 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:15:10.468 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.468 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.468 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.468 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.468 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.468 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.468 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.468 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:10.468 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:10.726 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:10.726 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.726 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.726 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:10.726 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:10.726 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.726 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.726 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.726 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.726 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.726 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.726 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.726 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.292 00:15:11.292 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.292 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.292 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.550 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.550 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.550 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.550 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.550 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.550 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.550 { 00:15:11.550 "cntlid": 33, 00:15:11.550 "qid": 0, 00:15:11.550 "state": "enabled", 00:15:11.550 "thread": "nvmf_tgt_poll_group_000", 00:15:11.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:11.550 "listen_address": { 00:15:11.550 "trtype": "TCP", 00:15:11.550 "adrfam": "IPv4", 00:15:11.550 "traddr": "10.0.0.2", 00:15:11.550 "trsvcid": "4420" 00:15:11.550 }, 00:15:11.550 "peer_address": { 00:15:11.550 "trtype": "TCP", 00:15:11.550 "adrfam": "IPv4", 00:15:11.550 "traddr": "10.0.0.1", 00:15:11.550 "trsvcid": "48554" 00:15:11.550 }, 00:15:11.550 "auth": { 00:15:11.550 "state": "completed", 00:15:11.550 "digest": "sha256", 00:15:11.550 "dhgroup": "ffdhe6144" 00:15:11.550 } 00:15:11.550 } 00:15:11.550 ]' 00:15:11.550 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.550 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.550 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.808 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:11.808 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.808 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.808 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.808 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.066 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret 
DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:15:12.066 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:15:12.999 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.999 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:12.999 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.999 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.999 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.999 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.000 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:13.000 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:13.258 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:13.258 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.258 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.258 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:13.258 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:13.258 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.258 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.258 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.258 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.258 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.258 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.258 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.258 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.823 00:15:13.823 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.823 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.823 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.081 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.081 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.081 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.081 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.081 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.081 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.081 { 00:15:14.081 "cntlid": 35, 00:15:14.081 "qid": 0, 00:15:14.081 "state": "enabled", 00:15:14.081 "thread": "nvmf_tgt_poll_group_000", 00:15:14.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:14.081 "listen_address": { 00:15:14.081 "trtype": "TCP", 00:15:14.081 "adrfam": "IPv4", 00:15:14.081 "traddr": "10.0.0.2", 00:15:14.081 "trsvcid": "4420" 00:15:14.081 }, 00:15:14.081 "peer_address": { 00:15:14.081 "trtype": "TCP", 00:15:14.081 "adrfam": "IPv4", 00:15:14.081 "traddr": "10.0.0.1", 00:15:14.081 "trsvcid": "48582" 00:15:14.081 }, 00:15:14.081 "auth": { 00:15:14.081 "state": "completed", 00:15:14.081 "digest": "sha256", 00:15:14.081 "dhgroup": "ffdhe6144" 00:15:14.081 } 00:15:14.081 } 00:15:14.081 ]' 00:15:14.081 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.081 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.081 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.081 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:14.081 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.081 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.081 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.081 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.339 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:15:14.339 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:15:15.272 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.272 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.272 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.272 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.272 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.272 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.272 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:15.272 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:15.530 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:15.530 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.530 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.530 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:15.530 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:15.530 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.530 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.530 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.530 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.530 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.530 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.530 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.095 00:15:16.095 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.095 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.095 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.353 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.353 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.353 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.353 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.353 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.353 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.353 { 00:15:16.353 "cntlid": 37, 00:15:16.353 "qid": 0, 00:15:16.353 "state": "enabled", 00:15:16.353 "thread": "nvmf_tgt_poll_group_000", 00:15:16.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:16.353 "listen_address": { 00:15:16.353 "trtype": "TCP", 00:15:16.353 "adrfam": "IPv4", 00:15:16.353 "traddr": "10.0.0.2", 00:15:16.353 "trsvcid": "4420" 00:15:16.353 }, 00:15:16.353 "peer_address": { 00:15:16.353 "trtype": "TCP", 00:15:16.353 "adrfam": "IPv4", 00:15:16.353 "traddr": "10.0.0.1", 00:15:16.353 "trsvcid": "48600" 00:15:16.353 }, 00:15:16.353 "auth": { 00:15:16.353 "state": "completed", 00:15:16.353 "digest": "sha256", 00:15:16.353 "dhgroup": "ffdhe6144" 00:15:16.353 } 00:15:16.353 } 00:15:16.353 ]' 00:15:16.611 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.611 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.611 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.611 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:16.611 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.611 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.611 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:16.611 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.869 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:15:16.869 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:15:17.803 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.803 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.803 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.803 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.803 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.803 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.803 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:17.803 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:18.060 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:18.060 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.060 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.060 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:18.060 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:18.060 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.060 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:18.060 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.060 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.318 12:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.318 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:18.318 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.318 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.883 00:15:18.883 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.883 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.883 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.142 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.142 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.142 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.142 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.142 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.142 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.142 { 00:15:19.142 "cntlid": 39, 00:15:19.142 "qid": 0, 00:15:19.142 "state": "enabled", 00:15:19.142 "thread": "nvmf_tgt_poll_group_000", 00:15:19.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:19.142 "listen_address": { 00:15:19.142 "trtype": "TCP", 00:15:19.142 "adrfam": "IPv4", 00:15:19.142 "traddr": "10.0.0.2", 00:15:19.142 "trsvcid": "4420" 00:15:19.142 }, 00:15:19.142 "peer_address": { 00:15:19.142 "trtype": "TCP", 00:15:19.142 "adrfam": "IPv4", 00:15:19.142 "traddr": "10.0.0.1", 00:15:19.142 "trsvcid": "48622" 00:15:19.142 }, 00:15:19.142 "auth": { 00:15:19.142 "state": "completed", 00:15:19.142 "digest": "sha256", 00:15:19.142 "dhgroup": "ffdhe6144" 00:15:19.142 } 00:15:19.142 } 00:15:19.142 ]' 00:15:19.142 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.142 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.142 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.142 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:19.142 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.142 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:19.142 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.142 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.401 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:15:19.401 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:15:20.336 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.336 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.336 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.336 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.336 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.336 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:20.336 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.336 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:20.336 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:20.595 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:20.595 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.595 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.595 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:20.595 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:20.595 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.595 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.595 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
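(For reference: the pattern repeating in the entries above and below is one connect_authenticate iteration from target/auth.sh. For each digest/dhgroup/keyid combination the script restricts the host-side DH-HMAC-CHAP options, authorizes the host NQN on the subsystem with a key pair, attaches a controller through the SPDK host app, checks the negotiated auth parameters on the resulting qpair, and tears the connection down again. The condensed sketch below is illustrative rather than captured output: it reuses the socket path, NQNs and key names visible in this log, assumes the target-side calls go to the default RPC socket, introduces shell variables only for readability, and omits the key registration and xtrace/rpc_cmd plumbing.)

  # Illustrative, condensed form of one connect_authenticate iteration (sha256 / ffdhe8192 / key0).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock                     # SPDK host (initiator) application
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  subnqn=nqn.2024-03.io.spdk:cnode0

  # limit the initiator to a single digest/dhgroup so that combination is what gets negotiated
  $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # allow the host on the target subsystem, naming the previously registered keys (registration not shown here)
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # attach a controller from the host app with the matching keys; this triggers the DH-HMAC-CHAP handshake
  $rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # the new qpair should then report the negotiated auth state, digest and dhgroup
  $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'

  # tear down before the next combination
  $rpc -s $hostsock bdev_nvme_detach_controller nvme0
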
00:15:20.595 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.595 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.595 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.595 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.595 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.527 00:15:21.527 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.527 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.527 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.785 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.785 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.785 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.785 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.785 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.785 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.785 { 00:15:21.785 "cntlid": 41, 00:15:21.785 "qid": 0, 00:15:21.785 "state": "enabled", 00:15:21.785 "thread": "nvmf_tgt_poll_group_000", 00:15:21.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:21.785 "listen_address": { 00:15:21.785 "trtype": "TCP", 00:15:21.785 "adrfam": "IPv4", 00:15:21.785 "traddr": "10.0.0.2", 00:15:21.785 "trsvcid": "4420" 00:15:21.785 }, 00:15:21.785 "peer_address": { 00:15:21.785 "trtype": "TCP", 00:15:21.785 "adrfam": "IPv4", 00:15:21.785 "traddr": "10.0.0.1", 00:15:21.785 "trsvcid": "33908" 00:15:21.785 }, 00:15:21.785 "auth": { 00:15:21.785 "state": "completed", 00:15:21.785 "digest": "sha256", 00:15:21.785 "dhgroup": "ffdhe8192" 00:15:21.785 } 00:15:21.785 } 00:15:21.785 ]' 00:15:21.785 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.785 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.043 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.043 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:22.043 12:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.043 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.043 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.043 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.300 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:15:22.300 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:15:23.234 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.234 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.234 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.234 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.234 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.234 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.234 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:23.234 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:23.492 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:23.492 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.492 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.492 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:23.492 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:23.492 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.492 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.492 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.492 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.492 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.492 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.492 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.492 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.426 00:15:24.426 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.426 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.426 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.426 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.426 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.426 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.426 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.426 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.426 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.426 { 00:15:24.426 "cntlid": 43, 00:15:24.426 "qid": 0, 00:15:24.426 "state": "enabled", 00:15:24.426 "thread": "nvmf_tgt_poll_group_000", 00:15:24.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:24.426 "listen_address": { 00:15:24.426 "trtype": "TCP", 00:15:24.426 "adrfam": "IPv4", 00:15:24.426 "traddr": "10.0.0.2", 00:15:24.426 "trsvcid": "4420" 00:15:24.426 }, 00:15:24.426 "peer_address": { 00:15:24.426 "trtype": "TCP", 00:15:24.426 "adrfam": "IPv4", 00:15:24.426 "traddr": "10.0.0.1", 00:15:24.426 "trsvcid": "33936" 00:15:24.426 }, 00:15:24.426 "auth": { 00:15:24.426 "state": "completed", 00:15:24.426 "digest": "sha256", 00:15:24.426 "dhgroup": "ffdhe8192" 00:15:24.426 } 00:15:24.426 } 00:15:24.426 ]' 00:15:24.426 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.683 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:24.683 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.683 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.683 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.683 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.683 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.683 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.942 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:15:24.942 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:15:25.876 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.876 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:25.876 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.876 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.876 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.876 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.876 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:25.876 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:26.134 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:26.134 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.134 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.134 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:26.134 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:26.134 12:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.134 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.134 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.134 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.134 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.134 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.134 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.134 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.066 00:15:27.066 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.066 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.066 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.324 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.324 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.324 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.324 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.324 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.324 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.324 { 00:15:27.324 "cntlid": 45, 00:15:27.324 "qid": 0, 00:15:27.324 "state": "enabled", 00:15:27.324 "thread": "nvmf_tgt_poll_group_000", 00:15:27.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:27.324 "listen_address": { 00:15:27.324 "trtype": "TCP", 00:15:27.324 "adrfam": "IPv4", 00:15:27.324 "traddr": "10.0.0.2", 00:15:27.324 "trsvcid": "4420" 00:15:27.324 }, 00:15:27.324 "peer_address": { 00:15:27.324 "trtype": "TCP", 00:15:27.324 "adrfam": "IPv4", 00:15:27.324 "traddr": "10.0.0.1", 00:15:27.324 "trsvcid": "33964" 00:15:27.324 }, 00:15:27.324 "auth": { 00:15:27.324 "state": "completed", 00:15:27.324 "digest": "sha256", 00:15:27.324 "dhgroup": "ffdhe8192" 00:15:27.324 } 00:15:27.324 } 00:15:27.324 ]' 00:15:27.324 
12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.324 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.324 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.324 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:27.324 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.324 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.324 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.324 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.582 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:15:27.582 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:15:28.516 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.517 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.517 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.517 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.517 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.517 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.517 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:28.517 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:28.776 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:28.776 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.776 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.776 12:37:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:28.776 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:28.776 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.776 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:28.776 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.776 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.776 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.776 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:28.776 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.776 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.710 00:15:29.710 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.710 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.710 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.968 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.968 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.968 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.968 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.968 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.968 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.968 { 00:15:29.968 "cntlid": 47, 00:15:29.968 "qid": 0, 00:15:29.968 "state": "enabled", 00:15:29.968 "thread": "nvmf_tgt_poll_group_000", 00:15:29.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:29.968 "listen_address": { 00:15:29.968 "trtype": "TCP", 00:15:29.968 "adrfam": "IPv4", 00:15:29.968 "traddr": "10.0.0.2", 00:15:29.968 "trsvcid": "4420" 00:15:29.968 }, 00:15:29.968 "peer_address": { 00:15:29.968 "trtype": "TCP", 00:15:29.968 "adrfam": "IPv4", 00:15:29.968 "traddr": "10.0.0.1", 00:15:29.968 "trsvcid": "34000" 00:15:29.968 }, 00:15:29.968 "auth": { 00:15:29.968 "state": "completed", 00:15:29.968 
"digest": "sha256", 00:15:29.968 "dhgroup": "ffdhe8192" 00:15:29.968 } 00:15:29.968 } 00:15:29.968 ]' 00:15:29.968 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.968 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.968 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.968 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:29.968 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.226 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.226 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.226 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.484 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:15:30.484 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:15:31.418 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.418 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.418 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.418 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.418 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.418 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:31.418 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:31.418 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.418 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:31.418 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:31.676 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:31.676 12:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.676 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:31.676 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:31.676 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:31.676 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.676 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.676 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.676 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.676 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.676 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.676 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.676 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.934 00:15:31.934 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.934 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.934 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.192 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.192 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.192 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.192 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.192 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.192 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.192 { 00:15:32.192 "cntlid": 49, 00:15:32.192 "qid": 0, 00:15:32.192 "state": "enabled", 00:15:32.192 "thread": "nvmf_tgt_poll_group_000", 00:15:32.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:32.192 "listen_address": { 00:15:32.192 "trtype": "TCP", 00:15:32.192 "adrfam": "IPv4", 
00:15:32.192 "traddr": "10.0.0.2", 00:15:32.192 "trsvcid": "4420" 00:15:32.192 }, 00:15:32.192 "peer_address": { 00:15:32.192 "trtype": "TCP", 00:15:32.192 "adrfam": "IPv4", 00:15:32.192 "traddr": "10.0.0.1", 00:15:32.192 "trsvcid": "54986" 00:15:32.192 }, 00:15:32.192 "auth": { 00:15:32.192 "state": "completed", 00:15:32.192 "digest": "sha384", 00:15:32.192 "dhgroup": "null" 00:15:32.192 } 00:15:32.192 } 00:15:32.192 ]' 00:15:32.192 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.192 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.192 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.192 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:32.192 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.192 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.192 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.192 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.758 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:15:32.758 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.692 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.258 00:15:34.258 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.258 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.258 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.258 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.258 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.516 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.516 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.516 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.516 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.516 { 00:15:34.516 "cntlid": 51, 00:15:34.516 "qid": 0, 00:15:34.516 "state": "enabled", 
00:15:34.516 "thread": "nvmf_tgt_poll_group_000", 00:15:34.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:34.516 "listen_address": { 00:15:34.516 "trtype": "TCP", 00:15:34.516 "adrfam": "IPv4", 00:15:34.516 "traddr": "10.0.0.2", 00:15:34.516 "trsvcid": "4420" 00:15:34.516 }, 00:15:34.516 "peer_address": { 00:15:34.516 "trtype": "TCP", 00:15:34.516 "adrfam": "IPv4", 00:15:34.516 "traddr": "10.0.0.1", 00:15:34.516 "trsvcid": "55022" 00:15:34.516 }, 00:15:34.516 "auth": { 00:15:34.516 "state": "completed", 00:15:34.516 "digest": "sha384", 00:15:34.516 "dhgroup": "null" 00:15:34.516 } 00:15:34.516 } 00:15:34.516 ]' 00:15:34.516 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.516 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.516 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.516 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:34.516 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.516 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.516 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.516 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.774 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:15:34.774 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:15:35.708 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.708 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:35.708 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.708 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.708 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.708 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.708 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:15:35.708 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:35.966 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:35.966 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.966 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:35.966 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:35.966 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:35.966 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.966 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.966 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.966 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.966 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.966 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.966 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.966 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.223 00:15:36.223 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.223 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.224 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.504 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.504 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.504 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.504 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.504 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.504 12:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.504 { 00:15:36.504 "cntlid": 53, 00:15:36.504 "qid": 0, 00:15:36.504 "state": "enabled", 00:15:36.504 "thread": "nvmf_tgt_poll_group_000", 00:15:36.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:36.504 "listen_address": { 00:15:36.504 "trtype": "TCP", 00:15:36.504 "adrfam": "IPv4", 00:15:36.504 "traddr": "10.0.0.2", 00:15:36.504 "trsvcid": "4420" 00:15:36.504 }, 00:15:36.504 "peer_address": { 00:15:36.504 "trtype": "TCP", 00:15:36.504 "adrfam": "IPv4", 00:15:36.504 "traddr": "10.0.0.1", 00:15:36.504 "trsvcid": "55040" 00:15:36.504 }, 00:15:36.504 "auth": { 00:15:36.504 "state": "completed", 00:15:36.504 "digest": "sha384", 00:15:36.504 "dhgroup": "null" 00:15:36.504 } 00:15:36.504 } 00:15:36.504 ]' 00:15:36.504 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.504 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.504 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.792 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:36.792 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.792 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.792 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.792 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.093 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:15:37.093 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:15:38.026 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.026 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.026 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.026 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.026 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.026 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:15:38.026 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:38.026 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:38.285 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:38.285 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.285 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:38.285 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:38.285 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:38.285 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.285 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:38.285 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.285 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.285 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.285 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.285 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.285 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.543 00:15:38.543 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.543 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.543 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.800 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.800 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.800 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.800 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.800 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.800 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.800 { 00:15:38.800 "cntlid": 55, 00:15:38.800 "qid": 0, 00:15:38.800 "state": "enabled", 00:15:38.800 "thread": "nvmf_tgt_poll_group_000", 00:15:38.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:38.800 "listen_address": { 00:15:38.800 "trtype": "TCP", 00:15:38.800 "adrfam": "IPv4", 00:15:38.800 "traddr": "10.0.0.2", 00:15:38.800 "trsvcid": "4420" 00:15:38.800 }, 00:15:38.800 "peer_address": { 00:15:38.800 "trtype": "TCP", 00:15:38.800 "adrfam": "IPv4", 00:15:38.800 "traddr": "10.0.0.1", 00:15:38.800 "trsvcid": "55068" 00:15:38.800 }, 00:15:38.800 "auth": { 00:15:38.800 "state": "completed", 00:15:38.800 "digest": "sha384", 00:15:38.800 "dhgroup": "null" 00:15:38.800 } 00:15:38.800 } 00:15:38.800 ]' 00:15:38.800 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.800 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.800 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.800 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:38.800 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.800 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.800 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.800 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.059 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:15:39.059 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:15:39.992 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.992 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:39.992 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.992 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.992 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.992 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.992 12:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.992 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:39.992 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:40.250 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:40.250 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.250 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:40.250 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:40.250 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:40.250 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.250 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.250 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.250 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.250 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.250 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.250 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.250 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.817 00:15:40.817 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.817 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.817 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.075 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.075 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.075 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:41.075 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.075 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.075 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.075 { 00:15:41.075 "cntlid": 57, 00:15:41.075 "qid": 0, 00:15:41.075 "state": "enabled", 00:15:41.075 "thread": "nvmf_tgt_poll_group_000", 00:15:41.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:41.075 "listen_address": { 00:15:41.075 "trtype": "TCP", 00:15:41.075 "adrfam": "IPv4", 00:15:41.075 "traddr": "10.0.0.2", 00:15:41.075 "trsvcid": "4420" 00:15:41.075 }, 00:15:41.075 "peer_address": { 00:15:41.075 "trtype": "TCP", 00:15:41.075 "adrfam": "IPv4", 00:15:41.075 "traddr": "10.0.0.1", 00:15:41.075 "trsvcid": "47876" 00:15:41.075 }, 00:15:41.075 "auth": { 00:15:41.075 "state": "completed", 00:15:41.075 "digest": "sha384", 00:15:41.075 "dhgroup": "ffdhe2048" 00:15:41.075 } 00:15:41.075 } 00:15:41.075 ]' 00:15:41.075 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.075 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.075 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.075 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:41.075 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.075 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.075 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.075 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.332 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:15:41.332 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:15:42.265 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.265 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.265 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.265 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.265 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.265 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.265 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:42.265 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:42.523 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:42.523 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.523 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.523 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:42.523 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:42.523 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.523 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.523 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.523 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.523 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.523 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.523 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.523 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.781 00:15:43.039 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.039 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.039 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.297 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.297 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.297 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.297 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.297 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.297 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.297 { 00:15:43.297 "cntlid": 59, 00:15:43.297 "qid": 0, 00:15:43.297 "state": "enabled", 00:15:43.297 "thread": "nvmf_tgt_poll_group_000", 00:15:43.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:43.297 "listen_address": { 00:15:43.297 "trtype": "TCP", 00:15:43.297 "adrfam": "IPv4", 00:15:43.297 "traddr": "10.0.0.2", 00:15:43.297 "trsvcid": "4420" 00:15:43.297 }, 00:15:43.297 "peer_address": { 00:15:43.297 "trtype": "TCP", 00:15:43.297 "adrfam": "IPv4", 00:15:43.297 "traddr": "10.0.0.1", 00:15:43.297 "trsvcid": "47888" 00:15:43.297 }, 00:15:43.297 "auth": { 00:15:43.297 "state": "completed", 00:15:43.297 "digest": "sha384", 00:15:43.297 "dhgroup": "ffdhe2048" 00:15:43.297 } 00:15:43.297 } 00:15:43.297 ]' 00:15:43.297 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.297 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.297 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.297 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.297 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.297 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.297 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.297 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.556 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:15:43.556 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:15:44.488 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.488 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:44.488 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.488 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.488 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.488 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.488 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:44.488 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:44.747 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:44.747 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.747 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:44.747 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:44.747 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:44.747 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.747 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.747 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.747 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.747 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.747 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.747 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.747 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.005 00:15:45.005 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.005 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:45.005 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.264 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.264 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.264 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.264 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.264 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.264 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.264 { 00:15:45.264 "cntlid": 61, 00:15:45.264 "qid": 0, 00:15:45.264 "state": "enabled", 00:15:45.264 "thread": "nvmf_tgt_poll_group_000", 00:15:45.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:45.264 "listen_address": { 00:15:45.264 "trtype": "TCP", 00:15:45.264 "adrfam": "IPv4", 00:15:45.264 "traddr": "10.0.0.2", 00:15:45.264 "trsvcid": "4420" 00:15:45.264 }, 00:15:45.264 "peer_address": { 00:15:45.264 "trtype": "TCP", 00:15:45.264 "adrfam": "IPv4", 00:15:45.264 "traddr": "10.0.0.1", 00:15:45.264 "trsvcid": "47910" 00:15:45.264 }, 00:15:45.264 "auth": { 00:15:45.264 "state": "completed", 00:15:45.264 "digest": "sha384", 00:15:45.264 "dhgroup": "ffdhe2048" 00:15:45.264 } 00:15:45.264 } 00:15:45.264 ]' 00:15:45.522 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.522 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.522 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.522 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.522 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.522 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.522 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.522 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.780 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:15:45.780 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:15:46.714 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.714 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.714 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.714 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.714 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.714 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.714 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:46.714 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:46.972 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:46.972 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.972 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.972 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:46.972 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:46.972 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.972 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:46.972 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.972 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.972 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.972 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:46.972 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.972 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.231 00:15:47.231 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.231 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.231 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.489 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.489 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.489 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.489 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.489 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.489 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.489 { 00:15:47.489 "cntlid": 63, 00:15:47.489 "qid": 0, 00:15:47.489 "state": "enabled", 00:15:47.489 "thread": "nvmf_tgt_poll_group_000", 00:15:47.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:47.489 "listen_address": { 00:15:47.489 "trtype": "TCP", 00:15:47.489 "adrfam": "IPv4", 00:15:47.489 "traddr": "10.0.0.2", 00:15:47.489 "trsvcid": "4420" 00:15:47.489 }, 00:15:47.489 "peer_address": { 00:15:47.489 "trtype": "TCP", 00:15:47.489 "adrfam": "IPv4", 00:15:47.489 "traddr": "10.0.0.1", 00:15:47.489 "trsvcid": "47930" 00:15:47.489 }, 00:15:47.489 "auth": { 00:15:47.489 "state": "completed", 00:15:47.489 "digest": "sha384", 00:15:47.489 "dhgroup": "ffdhe2048" 00:15:47.489 } 00:15:47.489 } 00:15:47.489 ]' 00:15:47.489 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.489 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.489 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.746 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.746 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.746 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.747 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.747 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.005 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:15:48.005 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:15:48.937 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:48.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.937 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.937 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.937 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.937 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.937 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.937 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.937 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:48.937 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.195 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:49.195 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.195 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.195 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:49.195 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.195 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.195 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.195 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.195 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.195 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.195 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.195 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.195 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.453 
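For orientation: once the RPC-driven attach has been verified and torn down, the same key set is exercised through the kernel initiator. A condensed sketch of that nvme-cli round trip, with addresses, NQNs and option names taken from the trace and the DHHC-1 secrets shortened to placeholders (not usable key material):

  # connect in-band with a host DH-HMAC-CHAP secret plus a controller secret
  # for bidirectional authentication
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
      --dhchap-secret 'DHHC-1:00:<host key>:' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl key>:'

  # tear the association down and drop the host from the subsystem before the
  # next digest/dhgroup/key combination is tested (target-side RPC, default socket assumed)
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55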
00:15:49.454 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.454 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.454 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.712 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.712 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.712 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.712 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.970 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.970 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.970 { 00:15:49.970 "cntlid": 65, 00:15:49.970 "qid": 0, 00:15:49.970 "state": "enabled", 00:15:49.970 "thread": "nvmf_tgt_poll_group_000", 00:15:49.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:49.970 "listen_address": { 00:15:49.970 "trtype": "TCP", 00:15:49.970 "adrfam": "IPv4", 00:15:49.970 "traddr": "10.0.0.2", 00:15:49.970 "trsvcid": "4420" 00:15:49.970 }, 00:15:49.970 "peer_address": { 00:15:49.970 "trtype": "TCP", 00:15:49.970 "adrfam": "IPv4", 00:15:49.970 "traddr": "10.0.0.1", 00:15:49.970 "trsvcid": "47952" 00:15:49.970 }, 00:15:49.970 "auth": { 00:15:49.970 "state": "completed", 00:15:49.970 "digest": "sha384", 00:15:49.970 "dhgroup": "ffdhe3072" 00:15:49.970 } 00:15:49.970 } 00:15:49.970 ]' 00:15:49.970 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.970 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.970 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.970 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.970 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.970 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.970 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.970 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.228 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:15:50.228 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:15:51.162 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.162 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.162 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.162 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.162 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.162 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.162 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:51.162 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:51.421 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:51.421 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.421 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.421 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:51.421 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:51.421 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.421 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.421 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.421 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.421 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.421 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.421 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.421 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.679 00:15:51.679 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.679 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.679 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.937 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.937 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.937 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.937 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.937 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.937 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.937 { 00:15:51.937 "cntlid": 67, 00:15:51.937 "qid": 0, 00:15:51.937 "state": "enabled", 00:15:51.937 "thread": "nvmf_tgt_poll_group_000", 00:15:51.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:51.937 "listen_address": { 00:15:51.937 "trtype": "TCP", 00:15:51.937 "adrfam": "IPv4", 00:15:51.937 "traddr": "10.0.0.2", 00:15:51.937 "trsvcid": "4420" 00:15:51.937 }, 00:15:51.937 "peer_address": { 00:15:51.937 "trtype": "TCP", 00:15:51.937 "adrfam": "IPv4", 00:15:51.937 "traddr": "10.0.0.1", 00:15:51.937 "trsvcid": "56116" 00:15:51.937 }, 00:15:51.937 "auth": { 00:15:51.937 "state": "completed", 00:15:51.937 "digest": "sha384", 00:15:51.937 "dhgroup": "ffdhe3072" 00:15:51.938 } 00:15:51.938 } 00:15:51.938 ]' 00:15:51.938 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.196 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.196 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.196 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.196 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.196 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.196 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.196 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.454 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret 
DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:15:52.454 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:15:53.386 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.386 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.386 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.386 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.386 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.386 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.386 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:53.386 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:53.645 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:53.645 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.645 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.645 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:53.645 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.645 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.645 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.645 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.645 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.645 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.645 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.645 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.645 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.902 00:15:53.902 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.902 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.902 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.160 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.160 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.160 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.160 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.160 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.160 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.160 { 00:15:54.160 "cntlid": 69, 00:15:54.160 "qid": 0, 00:15:54.160 "state": "enabled", 00:15:54.160 "thread": "nvmf_tgt_poll_group_000", 00:15:54.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:54.160 "listen_address": { 00:15:54.160 "trtype": "TCP", 00:15:54.160 "adrfam": "IPv4", 00:15:54.160 "traddr": "10.0.0.2", 00:15:54.160 "trsvcid": "4420" 00:15:54.160 }, 00:15:54.160 "peer_address": { 00:15:54.160 "trtype": "TCP", 00:15:54.161 "adrfam": "IPv4", 00:15:54.161 "traddr": "10.0.0.1", 00:15:54.161 "trsvcid": "56138" 00:15:54.161 }, 00:15:54.161 "auth": { 00:15:54.161 "state": "completed", 00:15:54.161 "digest": "sha384", 00:15:54.161 "dhgroup": "ffdhe3072" 00:15:54.161 } 00:15:54.161 } 00:15:54.161 ]' 00:15:54.161 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.419 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.419 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.419 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:54.419 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.419 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.419 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.419 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:54.676 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:15:54.676 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:15:55.610 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.610 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.610 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.610 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.610 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.610 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.610 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.610 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.867 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:55.867 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.868 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.868 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:55.868 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.868 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.868 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:55.868 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.868 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.868 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.868 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
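[editor's note] Each authentication round in this trace repeats the same five-step shape: restrict the host-side DH-HMAC-CHAP digest/dhgroup, register the host NQN on the subsystem with the key under test, attach an in-process bdev controller through the host RPC socket, verify the negotiated qpair parameters, and detach. A minimal sketch of one round follows, reconstructed only from commands visible in the surrounding trace; it assumes the keys named key0..key3 (and their ckey* controller counterparts) were loaded earlier in the run, and the variable names are illustrative shorthand rather than part of the original auth.sh.

#!/usr/bin/env bash
# Sketch only: one connect_authenticate round (sha384 / ffdhe3072 / key3),
# condensed from the rpc.py invocations shown in the surrounding trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Host side: limit the initiator to the digest/dhgroup being exercised.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Target side: allow the host on the subsystem with the key under test
# (key3 carries no controller key in this trace, so no --dhchap-ctrlr-key).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

# Host side: attach a bdev controller over TCP, authenticating with key3.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key3

# Confirm the qpair completed auth with the expected digest and dhgroup.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# Tear down before the next key/dhgroup combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0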
00:15:55.868 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.868 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.125 00:15:56.125 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.125 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.125 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.382 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.382 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.382 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.382 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.640 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.640 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.640 { 00:15:56.640 "cntlid": 71, 00:15:56.640 "qid": 0, 00:15:56.640 "state": "enabled", 00:15:56.640 "thread": "nvmf_tgt_poll_group_000", 00:15:56.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:56.640 "listen_address": { 00:15:56.640 "trtype": "TCP", 00:15:56.640 "adrfam": "IPv4", 00:15:56.640 "traddr": "10.0.0.2", 00:15:56.640 "trsvcid": "4420" 00:15:56.640 }, 00:15:56.640 "peer_address": { 00:15:56.640 "trtype": "TCP", 00:15:56.640 "adrfam": "IPv4", 00:15:56.640 "traddr": "10.0.0.1", 00:15:56.640 "trsvcid": "56156" 00:15:56.640 }, 00:15:56.640 "auth": { 00:15:56.640 "state": "completed", 00:15:56.640 "digest": "sha384", 00:15:56.640 "dhgroup": "ffdhe3072" 00:15:56.640 } 00:15:56.640 } 00:15:56.640 ]' 00:15:56.640 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.640 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.640 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.640 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:56.640 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.640 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.640 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.640 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.898 12:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:15:56.899 12:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:15:57.829 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.829 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.829 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.829 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.829 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.829 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.829 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.829 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:57.829 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:58.087 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:58.087 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.087 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.087 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:58.087 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:58.087 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.087 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.087 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.087 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.087 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
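[editor's note] Each in-process round in this trace is also exercised end-to-end with the kernel initiator: nvme-cli connects using the plain DHHC-1 secret strings, the connection is torn down, and the host is deregistered before the next key is tried. A sketch of that leg, with the generated DHHC-1 values replaced by placeholders (the actual base64 strings appear verbatim in the trace) and with shorthand variable names that are not part of auth.sh:

#!/usr/bin/env bash
# Sketch only: the nvme-cli leg that follows each bdev_nvme round in the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
hostid=5b23e107-7094-e311-b1cb-001e67a97d55
secret='DHHC-1:00:<host secret placeholder>:'        # placeholder, not a real key
ctrl_secret='DHHC-1:03:<ctrl secret placeholder>:'   # placeholder, not a real key

# Connect with the same options used in the trace (-i 1, -l 0) and the
# bidirectional DH-HMAC-CHAP secrets.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"

# Drop the connection and deregister the host before moving to the next key.
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"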
00:15:58.087 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.087 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.087 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.653 00:15:58.653 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.653 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.653 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.911 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.911 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.911 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.911 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.911 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.911 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.911 { 00:15:58.911 "cntlid": 73, 00:15:58.911 "qid": 0, 00:15:58.911 "state": "enabled", 00:15:58.911 "thread": "nvmf_tgt_poll_group_000", 00:15:58.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:58.911 "listen_address": { 00:15:58.911 "trtype": "TCP", 00:15:58.911 "adrfam": "IPv4", 00:15:58.911 "traddr": "10.0.0.2", 00:15:58.911 "trsvcid": "4420" 00:15:58.911 }, 00:15:58.911 "peer_address": { 00:15:58.911 "trtype": "TCP", 00:15:58.911 "adrfam": "IPv4", 00:15:58.911 "traddr": "10.0.0.1", 00:15:58.911 "trsvcid": "56186" 00:15:58.911 }, 00:15:58.911 "auth": { 00:15:58.911 "state": "completed", 00:15:58.911 "digest": "sha384", 00:15:58.911 "dhgroup": "ffdhe4096" 00:15:58.911 } 00:15:58.911 } 00:15:58.911 ]' 00:15:58.911 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.911 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.911 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.911 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.911 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.911 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.911 
12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.911 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.169 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:15:59.169 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:16:00.103 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.103 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.103 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.103 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.103 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.103 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.103 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:00.103 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:00.361 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:00.361 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.361 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.361 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:00.361 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:00.361 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.361 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.361 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.361 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.361 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.361 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.361 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.361 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.925 00:16:00.925 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.925 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.925 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.183 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.183 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.183 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.183 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.183 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.183 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.183 { 00:16:01.183 "cntlid": 75, 00:16:01.183 "qid": 0, 00:16:01.183 "state": "enabled", 00:16:01.183 "thread": "nvmf_tgt_poll_group_000", 00:16:01.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:01.183 "listen_address": { 00:16:01.183 "trtype": "TCP", 00:16:01.183 "adrfam": "IPv4", 00:16:01.183 "traddr": "10.0.0.2", 00:16:01.183 "trsvcid": "4420" 00:16:01.183 }, 00:16:01.183 "peer_address": { 00:16:01.183 "trtype": "TCP", 00:16:01.183 "adrfam": "IPv4", 00:16:01.183 "traddr": "10.0.0.1", 00:16:01.183 "trsvcid": "56336" 00:16:01.183 }, 00:16:01.183 "auth": { 00:16:01.183 "state": "completed", 00:16:01.183 "digest": "sha384", 00:16:01.183 "dhgroup": "ffdhe4096" 00:16:01.183 } 00:16:01.183 } 00:16:01.183 ]' 00:16:01.183 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.183 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.183 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.183 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:01.183 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.183 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.183 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.183 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.442 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:16:01.442 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:16:02.375 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.375 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.375 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.375 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.375 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.375 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.375 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:02.375 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:02.633 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:02.633 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.633 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.633 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:02.633 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:02.633 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.633 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.633 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.633 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.633 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.633 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.633 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.633 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.199 00:16:03.199 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.199 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.199 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.457 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.457 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.457 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.457 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.457 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.457 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.457 { 00:16:03.457 "cntlid": 77, 00:16:03.457 "qid": 0, 00:16:03.457 "state": "enabled", 00:16:03.457 "thread": "nvmf_tgt_poll_group_000", 00:16:03.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:03.457 "listen_address": { 00:16:03.457 "trtype": "TCP", 00:16:03.457 "adrfam": "IPv4", 00:16:03.457 "traddr": "10.0.0.2", 00:16:03.457 "trsvcid": "4420" 00:16:03.457 }, 00:16:03.457 "peer_address": { 00:16:03.457 "trtype": "TCP", 00:16:03.457 "adrfam": "IPv4", 00:16:03.457 "traddr": "10.0.0.1", 00:16:03.457 "trsvcid": "56364" 00:16:03.457 }, 00:16:03.457 "auth": { 00:16:03.457 "state": "completed", 00:16:03.457 "digest": "sha384", 00:16:03.457 "dhgroup": "ffdhe4096" 00:16:03.457 } 00:16:03.457 } 00:16:03.457 ]' 00:16:03.457 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.457 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.715 12:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.715 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:03.715 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.715 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.715 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.715 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.972 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:16:03.972 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:16:04.905 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.905 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.905 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.905 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.905 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.905 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.905 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.905 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:05.161 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:05.161 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.161 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.162 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:05.162 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:05.162 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.162 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:05.162 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.162 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.162 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.162 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:05.162 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.162 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.726 00:16:05.726 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.726 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.726 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.985 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.985 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.985 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.985 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.985 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.985 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.985 { 00:16:05.985 "cntlid": 79, 00:16:05.985 "qid": 0, 00:16:05.985 "state": "enabled", 00:16:05.985 "thread": "nvmf_tgt_poll_group_000", 00:16:05.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:05.985 "listen_address": { 00:16:05.985 "trtype": "TCP", 00:16:05.985 "adrfam": "IPv4", 00:16:05.985 "traddr": "10.0.0.2", 00:16:05.985 "trsvcid": "4420" 00:16:05.985 }, 00:16:05.985 "peer_address": { 00:16:05.985 "trtype": "TCP", 00:16:05.985 "adrfam": "IPv4", 00:16:05.985 "traddr": "10.0.0.1", 00:16:05.985 "trsvcid": "56382" 00:16:05.985 }, 00:16:05.985 "auth": { 00:16:05.985 "state": "completed", 00:16:05.985 "digest": "sha384", 00:16:05.985 "dhgroup": "ffdhe4096" 00:16:05.985 } 00:16:05.985 } 00:16:05.985 ]' 00:16:05.985 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.985 12:37:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.985 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.985 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:05.985 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.985 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.985 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.985 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.243 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:16:06.243 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:16:07.177 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.177 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.177 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.177 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.177 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.177 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.177 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.177 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.177 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.435 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:07.435 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.435 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:07.435 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:07.435 12:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:07.435 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.435 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.435 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.435 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.435 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.435 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.435 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.435 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.002 00:16:08.002 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.002 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.002 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.260 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.260 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.260 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.260 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.260 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.260 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.260 { 00:16:08.260 "cntlid": 81, 00:16:08.260 "qid": 0, 00:16:08.260 "state": "enabled", 00:16:08.260 "thread": "nvmf_tgt_poll_group_000", 00:16:08.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:08.260 "listen_address": { 00:16:08.260 "trtype": "TCP", 00:16:08.260 "adrfam": "IPv4", 00:16:08.260 "traddr": "10.0.0.2", 00:16:08.260 "trsvcid": "4420" 00:16:08.260 }, 00:16:08.260 "peer_address": { 00:16:08.260 "trtype": "TCP", 00:16:08.260 "adrfam": "IPv4", 00:16:08.260 "traddr": "10.0.0.1", 00:16:08.260 "trsvcid": "56416" 00:16:08.260 }, 00:16:08.260 "auth": { 00:16:08.260 "state": "completed", 00:16:08.260 "digest": 
"sha384", 00:16:08.260 "dhgroup": "ffdhe6144" 00:16:08.260 } 00:16:08.260 } 00:16:08.260 ]' 00:16:08.260 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.260 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.260 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.260 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:08.260 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.518 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.518 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.518 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.776 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:16:08.776 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:16:09.709 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.709 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.709 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.709 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.709 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.709 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.709 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.709 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.967 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:09.967 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.967 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.967 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:09.967 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:09.967 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.967 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.967 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.967 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.967 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.967 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.967 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.967 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.533 00:16:10.533 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.533 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.533 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.792 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.792 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.792 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.792 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.792 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.792 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.792 { 00:16:10.792 "cntlid": 83, 00:16:10.792 "qid": 0, 00:16:10.792 "state": "enabled", 00:16:10.792 "thread": "nvmf_tgt_poll_group_000", 00:16:10.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:10.792 "listen_address": { 00:16:10.792 "trtype": "TCP", 00:16:10.792 "adrfam": "IPv4", 00:16:10.792 "traddr": "10.0.0.2", 00:16:10.792 
"trsvcid": "4420" 00:16:10.792 }, 00:16:10.792 "peer_address": { 00:16:10.792 "trtype": "TCP", 00:16:10.792 "adrfam": "IPv4", 00:16:10.792 "traddr": "10.0.0.1", 00:16:10.792 "trsvcid": "56432" 00:16:10.792 }, 00:16:10.792 "auth": { 00:16:10.792 "state": "completed", 00:16:10.792 "digest": "sha384", 00:16:10.792 "dhgroup": "ffdhe6144" 00:16:10.792 } 00:16:10.792 } 00:16:10.792 ]' 00:16:10.792 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.792 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.792 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.792 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.792 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.050 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.050 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.050 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.308 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:16:11.308 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:16:12.242 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.242 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.242 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.242 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.242 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.242 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.242 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:12.242 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:12.500 
12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:12.500 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.500 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:12.500 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:12.500 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:12.500 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.500 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.500 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.500 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.500 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.500 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.500 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.500 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.066 00:16:13.066 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.066 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.066 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.324 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.324 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.324 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.324 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.324 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.324 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.324 { 00:16:13.324 "cntlid": 85, 00:16:13.324 "qid": 0, 00:16:13.324 "state": "enabled", 00:16:13.324 "thread": "nvmf_tgt_poll_group_000", 00:16:13.324 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:13.324 "listen_address": { 00:16:13.324 "trtype": "TCP", 00:16:13.324 "adrfam": "IPv4", 00:16:13.324 "traddr": "10.0.0.2", 00:16:13.324 "trsvcid": "4420" 00:16:13.324 }, 00:16:13.324 "peer_address": { 00:16:13.324 "trtype": "TCP", 00:16:13.324 "adrfam": "IPv4", 00:16:13.324 "traddr": "10.0.0.1", 00:16:13.324 "trsvcid": "34846" 00:16:13.324 }, 00:16:13.324 "auth": { 00:16:13.324 "state": "completed", 00:16:13.324 "digest": "sha384", 00:16:13.324 "dhgroup": "ffdhe6144" 00:16:13.324 } 00:16:13.324 } 00:16:13.324 ]' 00:16:13.324 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.324 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.324 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.324 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:13.324 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.324 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.324 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.324 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.889 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:16:13.889 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:16:14.455 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.712 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.712 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.712 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.712 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.712 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.712 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:14.712 12:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:14.969 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:14.969 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.969 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.969 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:14.969 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:14.969 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.969 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:14.969 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.969 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.969 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.969 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:14.969 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.969 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.535 00:16:15.535 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.535 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.535 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.793 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.793 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.793 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.793 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.793 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.793 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.793 { 00:16:15.793 "cntlid": 87, 
00:16:15.793 "qid": 0, 00:16:15.793 "state": "enabled", 00:16:15.793 "thread": "nvmf_tgt_poll_group_000", 00:16:15.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:15.793 "listen_address": { 00:16:15.793 "trtype": "TCP", 00:16:15.793 "adrfam": "IPv4", 00:16:15.793 "traddr": "10.0.0.2", 00:16:15.793 "trsvcid": "4420" 00:16:15.793 }, 00:16:15.793 "peer_address": { 00:16:15.793 "trtype": "TCP", 00:16:15.793 "adrfam": "IPv4", 00:16:15.793 "traddr": "10.0.0.1", 00:16:15.793 "trsvcid": "34866" 00:16:15.793 }, 00:16:15.793 "auth": { 00:16:15.793 "state": "completed", 00:16:15.793 "digest": "sha384", 00:16:15.793 "dhgroup": "ffdhe6144" 00:16:15.793 } 00:16:15.793 } 00:16:15.793 ]' 00:16:15.793 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.793 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.793 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.793 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:15.793 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.793 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.793 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.793 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.051 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:16:16.051 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:16:16.985 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.985 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.985 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.985 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.985 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.985 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.985 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.985 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.985 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:17.243 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:17.243 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.243 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.243 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:17.243 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:17.243 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.243 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.243 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.243 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.243 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.243 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.243 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.243 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.177 00:16:18.177 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.177 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.177 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.435 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.435 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.435 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.435 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.435 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.435 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.435 { 00:16:18.435 "cntlid": 89, 00:16:18.435 "qid": 0, 00:16:18.435 "state": "enabled", 00:16:18.435 "thread": "nvmf_tgt_poll_group_000", 00:16:18.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:18.435 "listen_address": { 00:16:18.435 "trtype": "TCP", 00:16:18.435 "adrfam": "IPv4", 00:16:18.435 "traddr": "10.0.0.2", 00:16:18.435 "trsvcid": "4420" 00:16:18.435 }, 00:16:18.435 "peer_address": { 00:16:18.435 "trtype": "TCP", 00:16:18.435 "adrfam": "IPv4", 00:16:18.435 "traddr": "10.0.0.1", 00:16:18.435 "trsvcid": "34882" 00:16:18.435 }, 00:16:18.435 "auth": { 00:16:18.435 "state": "completed", 00:16:18.435 "digest": "sha384", 00:16:18.435 "dhgroup": "ffdhe8192" 00:16:18.435 } 00:16:18.435 } 00:16:18.435 ]' 00:16:18.435 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.435 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.435 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.435 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:18.435 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.435 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.435 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.435 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.000 12:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:16:19.000 12:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:16:19.934 12:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.934 12:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.934 12:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.934 12:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.934 12:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.934 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.934 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:19.934 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.193 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:20.193 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.193 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.193 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:20.193 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:20.193 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.193 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.193 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.193 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.193 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.193 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.193 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.193 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.127 00:16:21.127 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.127 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.127 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.385 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.385 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:21.385 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.385 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.385 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.385 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.385 { 00:16:21.385 "cntlid": 91, 00:16:21.385 "qid": 0, 00:16:21.385 "state": "enabled", 00:16:21.385 "thread": "nvmf_tgt_poll_group_000", 00:16:21.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:21.385 "listen_address": { 00:16:21.385 "trtype": "TCP", 00:16:21.385 "adrfam": "IPv4", 00:16:21.385 "traddr": "10.0.0.2", 00:16:21.385 "trsvcid": "4420" 00:16:21.385 }, 00:16:21.385 "peer_address": { 00:16:21.385 "trtype": "TCP", 00:16:21.385 "adrfam": "IPv4", 00:16:21.385 "traddr": "10.0.0.1", 00:16:21.385 "trsvcid": "34896" 00:16:21.385 }, 00:16:21.385 "auth": { 00:16:21.385 "state": "completed", 00:16:21.385 "digest": "sha384", 00:16:21.385 "dhgroup": "ffdhe8192" 00:16:21.385 } 00:16:21.385 } 00:16:21.385 ]' 00:16:21.385 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.385 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.385 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.385 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.385 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.385 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.385 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.385 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.642 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:16:21.642 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:16:22.575 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.575 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.575 12:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.575 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.575 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.575 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.575 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:22.575 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:22.833 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:22.833 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.833 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.833 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:22.833 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:22.833 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.833 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.833 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.833 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.833 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.833 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.833 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.833 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.767 00:16:23.767 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.767 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.767 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.025 12:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.025 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.025 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.025 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.025 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.025 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.025 { 00:16:24.025 "cntlid": 93, 00:16:24.025 "qid": 0, 00:16:24.025 "state": "enabled", 00:16:24.025 "thread": "nvmf_tgt_poll_group_000", 00:16:24.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:24.025 "listen_address": { 00:16:24.025 "trtype": "TCP", 00:16:24.025 "adrfam": "IPv4", 00:16:24.025 "traddr": "10.0.0.2", 00:16:24.025 "trsvcid": "4420" 00:16:24.025 }, 00:16:24.025 "peer_address": { 00:16:24.025 "trtype": "TCP", 00:16:24.025 "adrfam": "IPv4", 00:16:24.025 "traddr": "10.0.0.1", 00:16:24.025 "trsvcid": "59726" 00:16:24.025 }, 00:16:24.025 "auth": { 00:16:24.025 "state": "completed", 00:16:24.025 "digest": "sha384", 00:16:24.025 "dhgroup": "ffdhe8192" 00:16:24.025 } 00:16:24.025 } 00:16:24.025 ]' 00:16:24.025 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.025 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.025 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.025 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.025 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.283 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.283 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.283 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.561 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:16:24.561 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:16:25.497 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.497 12:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.497 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.497 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.497 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.497 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.497 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:25.497 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:25.755 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:25.755 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.755 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.755 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:25.755 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.755 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.755 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:25.755 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.755 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.755 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.755 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.755 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.755 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.358 00:16:26.680 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.680 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.680 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.680 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.680 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.680 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.680 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.680 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.680 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.680 { 00:16:26.680 "cntlid": 95, 00:16:26.680 "qid": 0, 00:16:26.680 "state": "enabled", 00:16:26.680 "thread": "nvmf_tgt_poll_group_000", 00:16:26.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:26.680 "listen_address": { 00:16:26.680 "trtype": "TCP", 00:16:26.680 "adrfam": "IPv4", 00:16:26.680 "traddr": "10.0.0.2", 00:16:26.680 "trsvcid": "4420" 00:16:26.680 }, 00:16:26.680 "peer_address": { 00:16:26.680 "trtype": "TCP", 00:16:26.680 "adrfam": "IPv4", 00:16:26.680 "traddr": "10.0.0.1", 00:16:26.680 "trsvcid": "59760" 00:16:26.680 }, 00:16:26.680 "auth": { 00:16:26.680 "state": "completed", 00:16:26.680 "digest": "sha384", 00:16:26.680 "dhgroup": "ffdhe8192" 00:16:26.680 } 00:16:26.680 } 00:16:26.680 ]' 00:16:26.680 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.963 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.963 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.963 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:26.963 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.963 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.963 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.963 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.220 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:16:27.220 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:16:28.153 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.153 12:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.153 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.153 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.153 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.153 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:28.153 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.153 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.153 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:28.153 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:28.411 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:28.411 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.411 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.411 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:28.411 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.411 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.411 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.411 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.411 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.411 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.411 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.411 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.411 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.669 00:16:28.669 
12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.669 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.669 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.927 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.927 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.927 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.927 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.927 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.927 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.927 { 00:16:28.927 "cntlid": 97, 00:16:28.927 "qid": 0, 00:16:28.927 "state": "enabled", 00:16:28.927 "thread": "nvmf_tgt_poll_group_000", 00:16:28.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:28.927 "listen_address": { 00:16:28.927 "trtype": "TCP", 00:16:28.927 "adrfam": "IPv4", 00:16:28.927 "traddr": "10.0.0.2", 00:16:28.927 "trsvcid": "4420" 00:16:28.927 }, 00:16:28.927 "peer_address": { 00:16:28.927 "trtype": "TCP", 00:16:28.927 "adrfam": "IPv4", 00:16:28.927 "traddr": "10.0.0.1", 00:16:28.927 "trsvcid": "59800" 00:16:28.927 }, 00:16:28.927 "auth": { 00:16:28.927 "state": "completed", 00:16:28.927 "digest": "sha512", 00:16:28.927 "dhgroup": "null" 00:16:28.927 } 00:16:28.927 } 00:16:28.927 ]' 00:16:28.927 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.927 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.927 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.927 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.927 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.927 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.927 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.927 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.493 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:16:29.493 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.426 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.992 00:16:30.992 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.992 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.992 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.250 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.250 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.250 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.250 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.250 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.250 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.250 { 00:16:31.250 "cntlid": 99, 00:16:31.250 "qid": 0, 00:16:31.250 "state": "enabled", 00:16:31.250 "thread": "nvmf_tgt_poll_group_000", 00:16:31.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:31.250 "listen_address": { 00:16:31.250 "trtype": "TCP", 00:16:31.250 "adrfam": "IPv4", 00:16:31.250 "traddr": "10.0.0.2", 00:16:31.250 "trsvcid": "4420" 00:16:31.250 }, 00:16:31.250 "peer_address": { 00:16:31.250 "trtype": "TCP", 00:16:31.250 "adrfam": "IPv4", 00:16:31.250 "traddr": "10.0.0.1", 00:16:31.250 "trsvcid": "47004" 00:16:31.250 }, 00:16:31.250 "auth": { 00:16:31.250 "state": "completed", 00:16:31.250 "digest": "sha512", 00:16:31.250 "dhgroup": "null" 00:16:31.250 } 00:16:31.250 } 00:16:31.250 ]' 00:16:31.250 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.250 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.250 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.250 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:31.250 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.250 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.250 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.250 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.508 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:16:31.508 12:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:16:32.442 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.442 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.442 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.442 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.442 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.442 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.442 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:32.442 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:32.700 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:32.700 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.700 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.700 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:32.700 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.700 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.700 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.700 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.700 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.700 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.700 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.700 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
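For reference, each connect_authenticate iteration traced above reduces to the host/target command sequence sketched below. This is a condensed bash sketch of commands already visible in this log (SPDK rpc.py plus nvme-cli), not additional output from the run; the digest, dhgroup and key index vary per iteration, the DHHC-1 secrets are abbreviated to placeholders, and the target-side rpc.py socket is an assumption (the trace only shows the rpc_cmd wrapper, so the default socket is presumed).

# RPC endpoints and identities as used by this job; the TGT_RPC socket is assumed (application default).
HOST_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
TGT_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"
SUBNQN="nqn.2024-03.io.spdk:cnode0"

# Restrict the host initiator to the digest/dhgroup under test (sha384/ffdhe8192 in this pass).
$HOST_RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
# Authorize the host on the subsystem with the DH-HMAC-CHAP key pair under test.
$TGT_RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Attach a controller over TCP, authenticating with the same keys.
$HOST_RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Confirm the qpair negotiated the expected digest/dhgroup and reached auth state "completed".
$TGT_RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
$HOST_RPC bdev_nvme_detach_controller nvme0
# Repeat the connection through nvme-cli with the raw secrets (placeholders here), then clean up.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
  --dhchap-secret "DHHC-1:02:<key2 secret>" --dhchap-ctrl-secret "DHHC-1:01:<ckey2 secret>"
nvme disconnect -n "$SUBNQN"
$TGT_RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"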
00:16:32.700 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.958 00:16:32.958 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.958 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.958 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.216 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.216 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.216 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.216 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.474 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.474 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.474 { 00:16:33.474 "cntlid": 101, 00:16:33.474 "qid": 0, 00:16:33.474 "state": "enabled", 00:16:33.474 "thread": "nvmf_tgt_poll_group_000", 00:16:33.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:33.474 "listen_address": { 00:16:33.474 "trtype": "TCP", 00:16:33.474 "adrfam": "IPv4", 00:16:33.474 "traddr": "10.0.0.2", 00:16:33.474 "trsvcid": "4420" 00:16:33.474 }, 00:16:33.474 "peer_address": { 00:16:33.474 "trtype": "TCP", 00:16:33.474 "adrfam": "IPv4", 00:16:33.474 "traddr": "10.0.0.1", 00:16:33.474 "trsvcid": "47028" 00:16:33.474 }, 00:16:33.474 "auth": { 00:16:33.474 "state": "completed", 00:16:33.474 "digest": "sha512", 00:16:33.474 "dhgroup": "null" 00:16:33.474 } 00:16:33.474 } 00:16:33.474 ]' 00:16:33.474 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.474 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.474 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.474 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:33.474 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.474 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.474 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.474 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.732 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:16:33.732 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:16:34.665 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.665 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.665 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.665 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.665 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.665 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.665 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:34.665 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:34.923 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:34.923 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.923 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.923 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:34.923 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:34.923 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.923 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:34.923 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.923 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.923 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.923 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:34.923 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.923 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.489 00:16:35.489 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.489 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.489 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.489 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.489 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.489 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.489 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.489 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.489 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.489 { 00:16:35.489 "cntlid": 103, 00:16:35.489 "qid": 0, 00:16:35.489 "state": "enabled", 00:16:35.489 "thread": "nvmf_tgt_poll_group_000", 00:16:35.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:35.489 "listen_address": { 00:16:35.489 "trtype": "TCP", 00:16:35.489 "adrfam": "IPv4", 00:16:35.489 "traddr": "10.0.0.2", 00:16:35.489 "trsvcid": "4420" 00:16:35.489 }, 00:16:35.489 "peer_address": { 00:16:35.489 "trtype": "TCP", 00:16:35.489 "adrfam": "IPv4", 00:16:35.489 "traddr": "10.0.0.1", 00:16:35.489 "trsvcid": "47068" 00:16:35.489 }, 00:16:35.489 "auth": { 00:16:35.489 "state": "completed", 00:16:35.489 "digest": "sha512", 00:16:35.489 "dhgroup": "null" 00:16:35.489 } 00:16:35.489 } 00:16:35.489 ]' 00:16:35.489 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.748 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.748 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.748 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:35.748 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.748 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.748 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.748 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.005 12:38:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:16:36.005 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:16:36.939 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.939 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.939 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.939 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.939 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.939 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.939 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.939 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:36.939 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:37.197 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:37.197 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.197 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.197 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:37.197 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:37.197 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.197 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.197 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.197 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.197 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.197 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:16:37.197 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.197 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.454 00:16:37.713 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.713 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.713 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.971 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.971 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.971 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.971 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.971 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.971 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.971 { 00:16:37.971 "cntlid": 105, 00:16:37.971 "qid": 0, 00:16:37.971 "state": "enabled", 00:16:37.971 "thread": "nvmf_tgt_poll_group_000", 00:16:37.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:37.971 "listen_address": { 00:16:37.971 "trtype": "TCP", 00:16:37.971 "adrfam": "IPv4", 00:16:37.971 "traddr": "10.0.0.2", 00:16:37.971 "trsvcid": "4420" 00:16:37.971 }, 00:16:37.971 "peer_address": { 00:16:37.971 "trtype": "TCP", 00:16:37.971 "adrfam": "IPv4", 00:16:37.971 "traddr": "10.0.0.1", 00:16:37.971 "trsvcid": "47096" 00:16:37.971 }, 00:16:37.971 "auth": { 00:16:37.971 "state": "completed", 00:16:37.971 "digest": "sha512", 00:16:37.971 "dhgroup": "ffdhe2048" 00:16:37.971 } 00:16:37.971 } 00:16:37.971 ]' 00:16:37.971 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.971 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.971 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.971 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.971 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.971 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.971 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.971 12:38:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.229 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:16:38.229 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:16:39.163 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.163 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.163 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.163 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.163 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.163 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.163 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:39.163 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:39.421 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:39.421 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.421 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.421 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:39.421 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:39.421 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.421 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.421 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.421 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.421 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.421 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.421 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.421 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.679 00:16:39.679 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.679 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.679 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.937 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.937 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.937 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.938 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.196 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.196 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.196 { 00:16:40.196 "cntlid": 107, 00:16:40.196 "qid": 0, 00:16:40.196 "state": "enabled", 00:16:40.196 "thread": "nvmf_tgt_poll_group_000", 00:16:40.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:40.196 "listen_address": { 00:16:40.196 "trtype": "TCP", 00:16:40.196 "adrfam": "IPv4", 00:16:40.196 "traddr": "10.0.0.2", 00:16:40.196 "trsvcid": "4420" 00:16:40.196 }, 00:16:40.196 "peer_address": { 00:16:40.196 "trtype": "TCP", 00:16:40.196 "adrfam": "IPv4", 00:16:40.196 "traddr": "10.0.0.1", 00:16:40.196 "trsvcid": "47136" 00:16:40.196 }, 00:16:40.196 "auth": { 00:16:40.196 "state": "completed", 00:16:40.196 "digest": "sha512", 00:16:40.196 "dhgroup": "ffdhe2048" 00:16:40.196 } 00:16:40.196 } 00:16:40.196 ]' 00:16:40.196 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.196 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.196 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.196 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.196 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:16:40.196 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.196 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.196 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.454 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:16:40.454 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:16:41.390 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.390 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.390 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.390 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.390 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.390 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.390 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.390 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.729 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:41.729 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.729 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.729 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:41.729 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:41.729 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.729 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:41.729 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.729 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.729 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.729 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.729 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.729 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.987 00:16:41.987 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.987 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.987 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.245 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.245 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.245 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.245 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.245 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.245 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.245 { 00:16:42.245 "cntlid": 109, 00:16:42.245 "qid": 0, 00:16:42.245 "state": "enabled", 00:16:42.245 "thread": "nvmf_tgt_poll_group_000", 00:16:42.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:42.245 "listen_address": { 00:16:42.245 "trtype": "TCP", 00:16:42.245 "adrfam": "IPv4", 00:16:42.245 "traddr": "10.0.0.2", 00:16:42.245 "trsvcid": "4420" 00:16:42.245 }, 00:16:42.245 "peer_address": { 00:16:42.245 "trtype": "TCP", 00:16:42.245 "adrfam": "IPv4", 00:16:42.245 "traddr": "10.0.0.1", 00:16:42.245 "trsvcid": "57566" 00:16:42.245 }, 00:16:42.245 "auth": { 00:16:42.245 "state": "completed", 00:16:42.245 "digest": "sha512", 00:16:42.245 "dhgroup": "ffdhe2048" 00:16:42.245 } 00:16:42.245 } 00:16:42.245 ]' 00:16:42.245 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.245 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.245 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.245 12:38:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.245 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.504 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.504 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.504 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.762 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:16:42.762 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:16:43.696 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.696 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.696 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.696 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.696 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.696 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.696 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:43.696 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:43.955 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:43.955 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.955 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.955 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:43.955 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.955 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.955 12:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:43.955 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.955 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.955 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.955 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:43.955 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.955 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.213 00:16:44.213 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.213 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.213 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.471 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.471 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.471 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.471 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.471 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.471 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.471 { 00:16:44.471 "cntlid": 111, 00:16:44.471 "qid": 0, 00:16:44.471 "state": "enabled", 00:16:44.471 "thread": "nvmf_tgt_poll_group_000", 00:16:44.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:44.471 "listen_address": { 00:16:44.471 "trtype": "TCP", 00:16:44.471 "adrfam": "IPv4", 00:16:44.471 "traddr": "10.0.0.2", 00:16:44.471 "trsvcid": "4420" 00:16:44.471 }, 00:16:44.471 "peer_address": { 00:16:44.471 "trtype": "TCP", 00:16:44.471 "adrfam": "IPv4", 00:16:44.471 "traddr": "10.0.0.1", 00:16:44.471 "trsvcid": "57596" 00:16:44.471 }, 00:16:44.471 "auth": { 00:16:44.471 "state": "completed", 00:16:44.471 "digest": "sha512", 00:16:44.471 "dhgroup": "ffdhe2048" 00:16:44.471 } 00:16:44.471 } 00:16:44.471 ]' 00:16:44.471 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.471 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.471 
12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.471 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:44.471 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.471 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.471 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.471 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.036 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:16:45.036 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:16:45.601 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.859 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.859 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.859 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.859 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.859 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.859 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.859 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:45.859 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.117 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:46.117 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.117 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.117 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:46.117 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.117 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.117 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.117 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.117 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.117 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.117 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.117 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.117 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.375 00:16:46.375 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.375 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.375 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.633 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.633 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.633 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.633 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.633 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.633 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.633 { 00:16:46.633 "cntlid": 113, 00:16:46.633 "qid": 0, 00:16:46.633 "state": "enabled", 00:16:46.633 "thread": "nvmf_tgt_poll_group_000", 00:16:46.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:46.633 "listen_address": { 00:16:46.633 "trtype": "TCP", 00:16:46.633 "adrfam": "IPv4", 00:16:46.633 "traddr": "10.0.0.2", 00:16:46.633 "trsvcid": "4420" 00:16:46.633 }, 00:16:46.633 "peer_address": { 00:16:46.633 "trtype": "TCP", 00:16:46.633 "adrfam": "IPv4", 00:16:46.633 "traddr": "10.0.0.1", 00:16:46.633 "trsvcid": "57622" 00:16:46.633 }, 00:16:46.633 "auth": { 00:16:46.633 "state": "completed", 00:16:46.633 "digest": "sha512", 00:16:46.633 "dhgroup": "ffdhe3072" 00:16:46.633 } 00:16:46.633 } 00:16:46.633 ]' 00:16:46.633 12:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.891 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.891 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.891 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.891 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.891 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.891 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.891 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.149 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:16:47.149 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:16:48.082 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.082 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.082 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.082 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.082 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.082 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.082 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:48.082 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:48.340 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:48.340 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.340 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:48.340 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:48.340 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:48.340 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.340 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.340 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.340 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.340 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.340 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.340 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.340 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.599 00:16:48.599 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.599 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.599 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.856 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.856 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.856 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.856 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.856 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.856 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.856 { 00:16:48.856 "cntlid": 115, 00:16:48.856 "qid": 0, 00:16:48.856 "state": "enabled", 00:16:48.856 "thread": "nvmf_tgt_poll_group_000", 00:16:48.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:48.856 "listen_address": { 00:16:48.856 "trtype": "TCP", 00:16:48.856 "adrfam": "IPv4", 00:16:48.856 "traddr": "10.0.0.2", 00:16:48.856 "trsvcid": "4420" 00:16:48.856 }, 00:16:48.856 "peer_address": { 00:16:48.856 "trtype": "TCP", 00:16:48.856 "adrfam": "IPv4", 
00:16:48.856 "traddr": "10.0.0.1", 00:16:48.856 "trsvcid": "57644" 00:16:48.856 }, 00:16:48.856 "auth": { 00:16:48.856 "state": "completed", 00:16:48.856 "digest": "sha512", 00:16:48.856 "dhgroup": "ffdhe3072" 00:16:48.856 } 00:16:48.856 } 00:16:48.856 ]' 00:16:48.856 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.114 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.114 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.114 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.114 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.114 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.114 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.114 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.373 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:16:49.373 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:16:50.307 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.307 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.307 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.307 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.307 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.307 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.307 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:50.307 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:50.565 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:16:50.565 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.565 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.565 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:50.565 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.565 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.565 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.565 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.565 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.565 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.565 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.565 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.565 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.823 00:16:50.823 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.823 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.823 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.390 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.390 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.390 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.390 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.390 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.390 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.390 { 00:16:51.390 "cntlid": 117, 00:16:51.390 "qid": 0, 00:16:51.390 "state": "enabled", 00:16:51.390 "thread": "nvmf_tgt_poll_group_000", 00:16:51.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:51.390 "listen_address": { 00:16:51.390 "trtype": "TCP", 
00:16:51.391 "adrfam": "IPv4", 00:16:51.391 "traddr": "10.0.0.2", 00:16:51.391 "trsvcid": "4420" 00:16:51.391 }, 00:16:51.391 "peer_address": { 00:16:51.391 "trtype": "TCP", 00:16:51.391 "adrfam": "IPv4", 00:16:51.391 "traddr": "10.0.0.1", 00:16:51.391 "trsvcid": "48644" 00:16:51.391 }, 00:16:51.391 "auth": { 00:16:51.391 "state": "completed", 00:16:51.391 "digest": "sha512", 00:16:51.391 "dhgroup": "ffdhe3072" 00:16:51.391 } 00:16:51.391 } 00:16:51.391 ]' 00:16:51.391 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.391 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.391 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.391 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.391 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.391 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.391 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.391 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.648 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:16:51.649 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:16:52.582 12:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.582 12:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.582 12:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.582 12:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.582 12:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.582 12:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.582 12:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.582 12:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.842 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:52.842 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.842 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.842 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:52.842 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.842 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.842 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:52.842 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.842 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.842 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.842 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.842 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.842 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.100 00:16:53.100 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.100 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.100 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.358 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.358 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.358 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.358 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.358 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.358 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.358 { 00:16:53.358 "cntlid": 119, 00:16:53.358 "qid": 0, 00:16:53.358 "state": "enabled", 00:16:53.358 "thread": "nvmf_tgt_poll_group_000", 00:16:53.358 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:53.358 "listen_address": { 00:16:53.358 "trtype": "TCP", 00:16:53.358 "adrfam": "IPv4", 00:16:53.358 "traddr": "10.0.0.2", 00:16:53.358 "trsvcid": "4420" 00:16:53.358 }, 00:16:53.358 "peer_address": { 00:16:53.358 "trtype": "TCP", 00:16:53.358 "adrfam": "IPv4", 00:16:53.358 "traddr": "10.0.0.1", 00:16:53.358 "trsvcid": "48676" 00:16:53.358 }, 00:16:53.358 "auth": { 00:16:53.358 "state": "completed", 00:16:53.358 "digest": "sha512", 00:16:53.358 "dhgroup": "ffdhe3072" 00:16:53.358 } 00:16:53.358 } 00:16:53.358 ]' 00:16:53.358 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.616 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.616 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.616 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:53.616 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.616 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.616 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.616 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.874 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:16:53.874 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:16:54.807 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.808 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.808 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.808 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.808 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.808 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.808 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.808 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:54.808 12:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.066 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:55.066 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.066 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.066 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:55.066 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.066 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.066 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.066 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.066 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.066 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.066 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.066 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.066 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.631 00:16:55.631 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.631 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.631 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.889 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.889 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.889 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.889 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.889 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.889 12:38:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.889 { 00:16:55.889 "cntlid": 121, 00:16:55.889 "qid": 0, 00:16:55.889 "state": "enabled", 00:16:55.889 "thread": "nvmf_tgt_poll_group_000", 00:16:55.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:55.889 "listen_address": { 00:16:55.889 "trtype": "TCP", 00:16:55.889 "adrfam": "IPv4", 00:16:55.889 "traddr": "10.0.0.2", 00:16:55.889 "trsvcid": "4420" 00:16:55.889 }, 00:16:55.889 "peer_address": { 00:16:55.889 "trtype": "TCP", 00:16:55.889 "adrfam": "IPv4", 00:16:55.889 "traddr": "10.0.0.1", 00:16:55.889 "trsvcid": "48708" 00:16:55.889 }, 00:16:55.889 "auth": { 00:16:55.889 "state": "completed", 00:16:55.889 "digest": "sha512", 00:16:55.889 "dhgroup": "ffdhe4096" 00:16:55.889 } 00:16:55.889 } 00:16:55.889 ]' 00:16:55.889 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.889 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.889 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.889 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.889 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.889 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.889 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.889 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.146 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:16:56.146 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:16:57.078 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.078 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.078 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.078 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.078 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
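The nvme_connect / nvme disconnect pairs interleaved in the log exercise the kernel initiator against the same subsystem. Stripped of the test plumbing, one such round reduces to the two nvme-cli calls sketched here; the DHHC-1 blobs are placeholders (the test generates fresh ones every run), and the host ID is the UUID portion of the host NQN, as in the log.

  # Kernel-initiator equivalent of one authenticated connect/disconnect from the log above.
  # <host-key> / <ctrl-key> stand in for the per-run DHHC-1 secrets; everything else matches the log.
  hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:${hostid}
  subnqn=nqn.2024-03.io.spdk:cnode0

  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
       --dhchap-secret 'DHHC-1:00:<host-key>:' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-key>:'

  nvme disconnect -n "$subnqn"   # prints "NQN:<subnqn> disconnected 1 controller(s)" on success

  # The test then removes the host from the subsystem (nvmf_subsystem_remove_host) before
  # reconfiguring the next digest/dhgroup combination, as the following log entries show.
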
00:16:57.078 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.078 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.078 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.336 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:57.336 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.336 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.336 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:57.336 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.336 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.336 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.336 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.336 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.336 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.336 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.336 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.336 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.902 00:16:57.902 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.902 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.903 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.160 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.160 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.160 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.161 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.161 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.161 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.161 { 00:16:58.161 "cntlid": 123, 00:16:58.161 "qid": 0, 00:16:58.161 "state": "enabled", 00:16:58.161 "thread": "nvmf_tgt_poll_group_000", 00:16:58.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:58.161 "listen_address": { 00:16:58.161 "trtype": "TCP", 00:16:58.161 "adrfam": "IPv4", 00:16:58.161 "traddr": "10.0.0.2", 00:16:58.161 "trsvcid": "4420" 00:16:58.161 }, 00:16:58.161 "peer_address": { 00:16:58.161 "trtype": "TCP", 00:16:58.161 "adrfam": "IPv4", 00:16:58.161 "traddr": "10.0.0.1", 00:16:58.161 "trsvcid": "48748" 00:16:58.161 }, 00:16:58.161 "auth": { 00:16:58.161 "state": "completed", 00:16:58.161 "digest": "sha512", 00:16:58.161 "dhgroup": "ffdhe4096" 00:16:58.161 } 00:16:58.161 } 00:16:58.161 ]' 00:16:58.161 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.161 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.161 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.161 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.161 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.161 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.161 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.161 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.419 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:16:58.419 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:16:59.353 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.353 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.353 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.353 12:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.353 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.353 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.353 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:59.353 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:59.611 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:59.611 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.611 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.611 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:59.611 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.611 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.611 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.611 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.611 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.611 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.611 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.611 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.611 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.178 00:17:00.178 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.178 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.178 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.436 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.436 12:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.436 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.436 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.436 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.436 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.436 { 00:17:00.436 "cntlid": 125, 00:17:00.436 "qid": 0, 00:17:00.436 "state": "enabled", 00:17:00.436 "thread": "nvmf_tgt_poll_group_000", 00:17:00.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:00.436 "listen_address": { 00:17:00.436 "trtype": "TCP", 00:17:00.436 "adrfam": "IPv4", 00:17:00.436 "traddr": "10.0.0.2", 00:17:00.436 "trsvcid": "4420" 00:17:00.436 }, 00:17:00.436 "peer_address": { 00:17:00.436 "trtype": "TCP", 00:17:00.436 "adrfam": "IPv4", 00:17:00.436 "traddr": "10.0.0.1", 00:17:00.436 "trsvcid": "48782" 00:17:00.436 }, 00:17:00.436 "auth": { 00:17:00.436 "state": "completed", 00:17:00.436 "digest": "sha512", 00:17:00.436 "dhgroup": "ffdhe4096" 00:17:00.437 } 00:17:00.437 } 00:17:00.437 ]' 00:17:00.437 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.437 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.437 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.437 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:00.437 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.437 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.437 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.437 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.695 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:17:00.695 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:17:01.629 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.629 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.629 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.629 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.629 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.629 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.629 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:01.629 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:02.194 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:02.194 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.194 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.194 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:02.194 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:02.194 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.194 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:02.194 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.194 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.194 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.194 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.194 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.194 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.453 00:17:02.453 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.453 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.453 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.711 12:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.711 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.711 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.711 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.711 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.711 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.711 { 00:17:02.711 "cntlid": 127, 00:17:02.711 "qid": 0, 00:17:02.711 "state": "enabled", 00:17:02.711 "thread": "nvmf_tgt_poll_group_000", 00:17:02.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:02.711 "listen_address": { 00:17:02.711 "trtype": "TCP", 00:17:02.711 "adrfam": "IPv4", 00:17:02.711 "traddr": "10.0.0.2", 00:17:02.711 "trsvcid": "4420" 00:17:02.711 }, 00:17:02.711 "peer_address": { 00:17:02.711 "trtype": "TCP", 00:17:02.711 "adrfam": "IPv4", 00:17:02.711 "traddr": "10.0.0.1", 00:17:02.711 "trsvcid": "50488" 00:17:02.711 }, 00:17:02.711 "auth": { 00:17:02.711 "state": "completed", 00:17:02.711 "digest": "sha512", 00:17:02.711 "dhgroup": "ffdhe4096" 00:17:02.711 } 00:17:02.711 } 00:17:02.711 ]' 00:17:02.711 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.711 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.711 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.969 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:02.969 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.969 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.969 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.969 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.227 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:17:03.227 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:17:04.160 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.160 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:04.160 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.160 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.160 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.160 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.160 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.160 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.160 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.418 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:04.418 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.418 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:04.418 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:04.418 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:04.418 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.418 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.418 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.418 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.418 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.418 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.418 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.418 12:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.984 00:17:04.984 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.984 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.984 
12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.241 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.241 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.241 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.242 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.242 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.242 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.242 { 00:17:05.242 "cntlid": 129, 00:17:05.242 "qid": 0, 00:17:05.242 "state": "enabled", 00:17:05.242 "thread": "nvmf_tgt_poll_group_000", 00:17:05.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:05.242 "listen_address": { 00:17:05.242 "trtype": "TCP", 00:17:05.242 "adrfam": "IPv4", 00:17:05.242 "traddr": "10.0.0.2", 00:17:05.242 "trsvcid": "4420" 00:17:05.242 }, 00:17:05.242 "peer_address": { 00:17:05.242 "trtype": "TCP", 00:17:05.242 "adrfam": "IPv4", 00:17:05.242 "traddr": "10.0.0.1", 00:17:05.242 "trsvcid": "50518" 00:17:05.242 }, 00:17:05.242 "auth": { 00:17:05.242 "state": "completed", 00:17:05.242 "digest": "sha512", 00:17:05.242 "dhgroup": "ffdhe6144" 00:17:05.242 } 00:17:05.242 } 00:17:05.242 ]' 00:17:05.242 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.242 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.242 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.242 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.242 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.242 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.242 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.242 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.808 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:17:05.808 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret 
DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:17:06.742 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.742 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.742 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.742 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.742 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.742 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.742 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:06.742 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:07.000 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:07.000 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.000 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.000 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:07.000 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:07.000 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.000 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.000 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.000 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.000 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.000 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.000 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.000 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.566 00:17:07.566 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.566 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.566 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.824 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.824 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.824 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.824 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.825 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.825 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.825 { 00:17:07.825 "cntlid": 131, 00:17:07.825 "qid": 0, 00:17:07.825 "state": "enabled", 00:17:07.825 "thread": "nvmf_tgt_poll_group_000", 00:17:07.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:07.825 "listen_address": { 00:17:07.825 "trtype": "TCP", 00:17:07.825 "adrfam": "IPv4", 00:17:07.825 "traddr": "10.0.0.2", 00:17:07.825 "trsvcid": "4420" 00:17:07.825 }, 00:17:07.825 "peer_address": { 00:17:07.825 "trtype": "TCP", 00:17:07.825 "adrfam": "IPv4", 00:17:07.825 "traddr": "10.0.0.1", 00:17:07.825 "trsvcid": "50546" 00:17:07.825 }, 00:17:07.825 "auth": { 00:17:07.825 "state": "completed", 00:17:07.825 "digest": "sha512", 00:17:07.825 "dhgroup": "ffdhe6144" 00:17:07.825 } 00:17:07.825 } 00:17:07.825 ]' 00:17:07.825 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.825 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.825 12:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.825 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.825 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.825 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.825 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.825 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.083 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:17:08.083 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:17:09.017 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.017 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.017 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.017 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.017 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.017 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.017 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:09.017 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:09.275 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:09.275 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.275 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.275 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:09.275 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:09.275 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.275 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.275 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.275 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.275 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.275 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.275 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.275 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.841 00:17:09.841 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.841 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.841 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.099 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.099 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.099 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.099 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.099 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.099 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.099 { 00:17:10.099 "cntlid": 133, 00:17:10.099 "qid": 0, 00:17:10.099 "state": "enabled", 00:17:10.099 "thread": "nvmf_tgt_poll_group_000", 00:17:10.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:10.099 "listen_address": { 00:17:10.099 "trtype": "TCP", 00:17:10.099 "adrfam": "IPv4", 00:17:10.099 "traddr": "10.0.0.2", 00:17:10.099 "trsvcid": "4420" 00:17:10.099 }, 00:17:10.099 "peer_address": { 00:17:10.099 "trtype": "TCP", 00:17:10.099 "adrfam": "IPv4", 00:17:10.099 "traddr": "10.0.0.1", 00:17:10.099 "trsvcid": "50568" 00:17:10.099 }, 00:17:10.099 "auth": { 00:17:10.099 "state": "completed", 00:17:10.099 "digest": "sha512", 00:17:10.099 "dhgroup": "ffdhe6144" 00:17:10.099 } 00:17:10.099 } 00:17:10.099 ]' 00:17:10.099 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.099 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.099 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.357 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:10.357 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.357 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.357 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.357 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.615 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret 
DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:17:10.615 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:17:11.548 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.548 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.548 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.548 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.548 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.548 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.548 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:11.548 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:11.807 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:11.807 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.807 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.807 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:11.807 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.807 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.807 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:11.807 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.807 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.807 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.807 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.807 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:11.807 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.373 00:17:12.373 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.373 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.373 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.631 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.631 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.631 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.631 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.631 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.631 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.631 { 00:17:12.631 "cntlid": 135, 00:17:12.631 "qid": 0, 00:17:12.631 "state": "enabled", 00:17:12.631 "thread": "nvmf_tgt_poll_group_000", 00:17:12.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:12.631 "listen_address": { 00:17:12.631 "trtype": "TCP", 00:17:12.631 "adrfam": "IPv4", 00:17:12.631 "traddr": "10.0.0.2", 00:17:12.631 "trsvcid": "4420" 00:17:12.631 }, 00:17:12.631 "peer_address": { 00:17:12.631 "trtype": "TCP", 00:17:12.631 "adrfam": "IPv4", 00:17:12.631 "traddr": "10.0.0.1", 00:17:12.631 "trsvcid": "41592" 00:17:12.631 }, 00:17:12.631 "auth": { 00:17:12.631 "state": "completed", 00:17:12.631 "digest": "sha512", 00:17:12.631 "dhgroup": "ffdhe6144" 00:17:12.631 } 00:17:12.631 } 00:17:12.631 ]' 00:17:12.631 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.631 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.631 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.631 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:12.631 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.631 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.631 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.631 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.889 12:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:17:12.889 12:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:17:13.822 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.822 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:13.822 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.822 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.822 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.822 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.822 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.822 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:13.822 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.081 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:14.081 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.081 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.081 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:14.081 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.081 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.081 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.081 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.081 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.081 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.081 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.081 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.081 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.014 00:17:15.014 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.014 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.014 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.272 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.272 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.272 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.272 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.272 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.272 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.272 { 00:17:15.272 "cntlid": 137, 00:17:15.272 "qid": 0, 00:17:15.272 "state": "enabled", 00:17:15.272 "thread": "nvmf_tgt_poll_group_000", 00:17:15.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:15.272 "listen_address": { 00:17:15.272 "trtype": "TCP", 00:17:15.272 "adrfam": "IPv4", 00:17:15.272 "traddr": "10.0.0.2", 00:17:15.272 "trsvcid": "4420" 00:17:15.272 }, 00:17:15.272 "peer_address": { 00:17:15.272 "trtype": "TCP", 00:17:15.272 "adrfam": "IPv4", 00:17:15.272 "traddr": "10.0.0.1", 00:17:15.272 "trsvcid": "41628" 00:17:15.272 }, 00:17:15.272 "auth": { 00:17:15.272 "state": "completed", 00:17:15.272 "digest": "sha512", 00:17:15.272 "dhgroup": "ffdhe8192" 00:17:15.272 } 00:17:15.272 } 00:17:15.272 ]' 00:17:15.272 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.272 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.272 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.272 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:15.272 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.530 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.530 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.530 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.825 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:17:15.825 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:17:16.474 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.474 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.474 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.474 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.474 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.474 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.474 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:16.474 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:16.732 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:16.732 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.732 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.732 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:16.732 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:16.732 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.732 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.732 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.732 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.991 12:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.991 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.991 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.991 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.560 00:17:17.560 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.560 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.560 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.126 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.126 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.126 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.126 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.126 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.126 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.126 { 00:17:18.126 "cntlid": 139, 00:17:18.126 "qid": 0, 00:17:18.126 "state": "enabled", 00:17:18.126 "thread": "nvmf_tgt_poll_group_000", 00:17:18.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:18.126 "listen_address": { 00:17:18.126 "trtype": "TCP", 00:17:18.126 "adrfam": "IPv4", 00:17:18.126 "traddr": "10.0.0.2", 00:17:18.126 "trsvcid": "4420" 00:17:18.126 }, 00:17:18.126 "peer_address": { 00:17:18.126 "trtype": "TCP", 00:17:18.126 "adrfam": "IPv4", 00:17:18.126 "traddr": "10.0.0.1", 00:17:18.126 "trsvcid": "41664" 00:17:18.126 }, 00:17:18.126 "auth": { 00:17:18.126 "state": "completed", 00:17:18.126 "digest": "sha512", 00:17:18.126 "dhgroup": "ffdhe8192" 00:17:18.126 } 00:17:18.126 } 00:17:18.126 ]' 00:17:18.126 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.126 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.126 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.127 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.127 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.127 12:38:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.127 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.127 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.385 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:17:18.385 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: --dhchap-ctrl-secret DHHC-1:02:MTIwN2U1Nzk3OGYyM2EzNzE3MjlmMGE3MWUxODhiZDY5YjhjYjdiYTIzOGM0ZDk12oSRLQ==: 00:17:19.320 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.320 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:19.320 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.320 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.320 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.320 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.320 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:19.320 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:19.579 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:19.579 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.579 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.579 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:19.579 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.579 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.579 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.579 12:38:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.579 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.579 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.579 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.579 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.579 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.513 00:17:20.513 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.513 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.513 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.771 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.771 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.771 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.771 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.771 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.771 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.771 { 00:17:20.771 "cntlid": 141, 00:17:20.771 "qid": 0, 00:17:20.771 "state": "enabled", 00:17:20.771 "thread": "nvmf_tgt_poll_group_000", 00:17:20.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:20.771 "listen_address": { 00:17:20.771 "trtype": "TCP", 00:17:20.771 "adrfam": "IPv4", 00:17:20.771 "traddr": "10.0.0.2", 00:17:20.771 "trsvcid": "4420" 00:17:20.771 }, 00:17:20.771 "peer_address": { 00:17:20.771 "trtype": "TCP", 00:17:20.771 "adrfam": "IPv4", 00:17:20.771 "traddr": "10.0.0.1", 00:17:20.771 "trsvcid": "41690" 00:17:20.771 }, 00:17:20.771 "auth": { 00:17:20.771 "state": "completed", 00:17:20.771 "digest": "sha512", 00:17:20.771 "dhgroup": "ffdhe8192" 00:17:20.771 } 00:17:20.771 } 00:17:20.771 ]' 00:17:20.771 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.771 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.771 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.771 12:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:20.771 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.771 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.771 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.771 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.336 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:17:21.336 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:01:ZmFlNjg2ODk2MTJiNTRkMTBjNDA0N2IyOTA4YTljNjn2z5RG: 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.270 12:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.270 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.205 00:17:23.205 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.205 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.205 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.464 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.464 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.464 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.464 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.464 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.464 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.464 { 00:17:23.464 "cntlid": 143, 00:17:23.464 "qid": 0, 00:17:23.464 "state": "enabled", 00:17:23.464 "thread": "nvmf_tgt_poll_group_000", 00:17:23.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:23.464 "listen_address": { 00:17:23.464 "trtype": "TCP", 00:17:23.464 "adrfam": "IPv4", 00:17:23.464 "traddr": "10.0.0.2", 00:17:23.464 "trsvcid": "4420" 00:17:23.464 }, 00:17:23.464 "peer_address": { 00:17:23.464 "trtype": "TCP", 00:17:23.464 "adrfam": "IPv4", 00:17:23.464 "traddr": "10.0.0.1", 00:17:23.464 "trsvcid": "40668" 00:17:23.464 }, 00:17:23.464 "auth": { 00:17:23.464 "state": "completed", 00:17:23.464 "digest": "sha512", 00:17:23.464 "dhgroup": "ffdhe8192" 00:17:23.464 } 00:17:23.464 } 00:17:23.464 ]' 00:17:23.464 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.464 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.464 
12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.464 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.464 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.722 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.722 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.722 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.979 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:17:23.979 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:17:24.910 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.910 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.910 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.910 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.910 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.910 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:24.910 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:24.910 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:24.910 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.910 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.910 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:25.168 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:25.168 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.168 12:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.168 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:25.168 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.168 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.168 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.168 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.168 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.168 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.168 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.168 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.168 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.101 00:17:26.101 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.101 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.101 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.359 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.359 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.359 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.359 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.359 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.359 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.359 { 00:17:26.359 "cntlid": 145, 00:17:26.359 "qid": 0, 00:17:26.359 "state": "enabled", 00:17:26.359 "thread": "nvmf_tgt_poll_group_000", 00:17:26.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:26.359 "listen_address": { 00:17:26.359 "trtype": "TCP", 00:17:26.359 "adrfam": "IPv4", 00:17:26.359 "traddr": "10.0.0.2", 00:17:26.359 "trsvcid": "4420" 00:17:26.359 }, 00:17:26.359 "peer_address": { 00:17:26.359 
"trtype": "TCP", 00:17:26.359 "adrfam": "IPv4", 00:17:26.359 "traddr": "10.0.0.1", 00:17:26.359 "trsvcid": "40688" 00:17:26.359 }, 00:17:26.359 "auth": { 00:17:26.359 "state": "completed", 00:17:26.359 "digest": "sha512", 00:17:26.359 "dhgroup": "ffdhe8192" 00:17:26.359 } 00:17:26.359 } 00:17:26.359 ]' 00:17:26.359 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.359 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.359 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.359 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.359 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.359 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.359 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.359 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.617 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:17:26.617 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YWMyNzhjM2Y0Njg4MzAwYTUwYTgxYzQ3YmFlZDJiYzFmZTY3YmFiYmE0MGM0OTRhwAly4g==: --dhchap-ctrl-secret DHHC-1:03:OWYxODg3NjAxZGE0NDE5NWIyM2MwZjA2ZWFmOTg2ZTMxZGZlYjZjNDdkZDdiYTJkZmE1ZjAzODU3ZTk0YTE4ZcxFZn0=: 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:27.551 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:28.484 request: 00:17:28.484 { 00:17:28.484 "name": "nvme0", 00:17:28.484 "trtype": "tcp", 00:17:28.484 "traddr": "10.0.0.2", 00:17:28.484 "adrfam": "ipv4", 00:17:28.484 "trsvcid": "4420", 00:17:28.484 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:28.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:28.484 "prchk_reftag": false, 00:17:28.484 "prchk_guard": false, 00:17:28.484 "hdgst": false, 00:17:28.485 "ddgst": false, 00:17:28.485 "dhchap_key": "key2", 00:17:28.485 "allow_unrecognized_csi": false, 00:17:28.485 "method": "bdev_nvme_attach_controller", 00:17:28.485 "req_id": 1 00:17:28.485 } 00:17:28.485 Got JSON-RPC error response 00:17:28.485 response: 00:17:28.485 { 00:17:28.485 "code": -5, 00:17:28.485 "message": "Input/output error" 00:17:28.485 } 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.485 12:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:28.485 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:29.051 request: 00:17:29.051 { 00:17:29.051 "name": "nvme0", 00:17:29.051 "trtype": "tcp", 00:17:29.051 "traddr": "10.0.0.2", 00:17:29.051 "adrfam": "ipv4", 00:17:29.051 "trsvcid": "4420", 00:17:29.051 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:29.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:29.051 "prchk_reftag": false, 00:17:29.051 "prchk_guard": false, 00:17:29.051 "hdgst": false, 00:17:29.051 "ddgst": false, 00:17:29.051 "dhchap_key": "key1", 00:17:29.051 "dhchap_ctrlr_key": "ckey2", 00:17:29.051 "allow_unrecognized_csi": false, 00:17:29.051 "method": "bdev_nvme_attach_controller", 00:17:29.051 "req_id": 1 00:17:29.051 } 00:17:29.051 Got JSON-RPC error response 00:17:29.051 response: 00:17:29.051 { 00:17:29.051 "code": -5, 00:17:29.051 "message": "Input/output error" 00:17:29.051 } 00:17:29.051 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:29.051 12:39:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.051 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.051 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.051 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:29.051 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.051 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.051 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.051 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:29.051 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.051 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.310 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.310 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.310 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:29.310 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.310 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:29.310 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.310 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:29.310 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.310 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.310 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.310 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.876 request: 00:17:29.876 { 00:17:29.876 "name": "nvme0", 00:17:29.876 "trtype": "tcp", 00:17:29.876 "traddr": "10.0.0.2", 00:17:29.876 "adrfam": "ipv4", 00:17:29.876 "trsvcid": "4420", 00:17:29.876 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:29.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:29.876 "prchk_reftag": false, 00:17:29.876 "prchk_guard": false, 00:17:29.876 "hdgst": false, 00:17:29.876 "ddgst": false, 00:17:29.876 "dhchap_key": "key1", 00:17:29.876 "dhchap_ctrlr_key": "ckey1", 00:17:29.876 "allow_unrecognized_csi": false, 00:17:29.876 "method": "bdev_nvme_attach_controller", 00:17:29.876 "req_id": 1 00:17:29.876 } 00:17:29.876 Got JSON-RPC error response 00:17:29.876 response: 00:17:29.876 { 00:17:29.876 "code": -5, 00:17:29.876 "message": "Input/output error" 00:17:29.876 } 00:17:29.876 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:29.876 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.876 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.876 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.876 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:29.876 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.876 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.876 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.876 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1008847 00:17:29.876 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1008847 ']' 00:17:29.876 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1008847 00:17:29.876 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:29.876 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.876 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1008847 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1008847' 00:17:30.135 killing process with pid 1008847 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1008847 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1008847 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1031729 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1031729 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1031729 ']' 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.135 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.711 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.711 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:30.711 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:30.711 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:30.711 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.711 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.711 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:30.711 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1031729 00:17:30.711 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1031729 ']' 00:17:30.711 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.711 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.711 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:30.711 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.711 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.970 null0 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.L4M 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.LJT ]] 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LJT 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.v5q 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.DuK ]] 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DuK 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:30.970 12:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qlK 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.I1R ]] 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I1R 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.970 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.971 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.971 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:30.971 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.CFI 00:17:30.971 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.971 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:17:31.230 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.605 nvme0n1 00:17:32.605 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.605 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.605 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.863 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.863 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.863 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.863 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.863 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.863 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.863 { 00:17:32.863 "cntlid": 1, 00:17:32.863 "qid": 0, 00:17:32.863 "state": "enabled", 00:17:32.863 "thread": "nvmf_tgt_poll_group_000", 00:17:32.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:32.863 "listen_address": { 00:17:32.863 "trtype": "TCP", 00:17:32.863 "adrfam": "IPv4", 00:17:32.863 "traddr": "10.0.0.2", 00:17:32.863 "trsvcid": "4420" 00:17:32.863 }, 00:17:32.863 "peer_address": { 00:17:32.863 "trtype": "TCP", 00:17:32.863 "adrfam": "IPv4", 00:17:32.863 "traddr": "10.0.0.1", 00:17:32.863 "trsvcid": "56486" 00:17:32.863 }, 00:17:32.863 "auth": { 00:17:32.863 "state": "completed", 00:17:32.863 "digest": "sha512", 00:17:32.863 "dhgroup": "ffdhe8192" 00:17:32.863 } 00:17:32.863 } 00:17:32.863 ]' 00:17:32.863 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.863 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.863 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.863 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.863 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.863 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.863 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.863 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.121 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:17:33.121 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:17:34.056 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.056 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.056 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.056 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.056 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.056 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:34.056 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.056 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.056 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.056 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:34.056 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:34.314 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:34.314 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:34.314 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:34.314 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:34.314 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.314 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:34.314 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.314 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.314 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.314 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.574 request: 00:17:34.574 { 00:17:34.574 "name": "nvme0", 00:17:34.574 "trtype": "tcp", 00:17:34.574 "traddr": "10.0.0.2", 00:17:34.574 "adrfam": "ipv4", 00:17:34.574 "trsvcid": "4420", 00:17:34.574 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:34.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:34.574 "prchk_reftag": false, 00:17:34.574 "prchk_guard": false, 00:17:34.574 "hdgst": false, 00:17:34.574 "ddgst": false, 00:17:34.574 "dhchap_key": "key3", 00:17:34.574 "allow_unrecognized_csi": false, 00:17:34.574 "method": "bdev_nvme_attach_controller", 00:17:34.574 "req_id": 1 00:17:34.574 } 00:17:34.574 Got JSON-RPC error response 00:17:34.574 response: 00:17:34.574 { 00:17:34.574 "code": -5, 00:17:34.574 "message": "Input/output error" 00:17:34.574 } 00:17:34.574 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:34.574 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:34.574 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:34.574 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:34.574 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:34.574 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:34.574 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:34.574 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:34.834 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:34.834 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:34.834 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:34.834 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:34.834 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.834 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:34.834 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.834 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.834 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.834 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.092 request: 00:17:35.092 { 00:17:35.092 "name": "nvme0", 00:17:35.092 "trtype": "tcp", 00:17:35.092 "traddr": "10.0.0.2", 00:17:35.092 "adrfam": "ipv4", 00:17:35.092 "trsvcid": "4420", 00:17:35.092 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:35.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:35.092 "prchk_reftag": false, 00:17:35.092 "prchk_guard": false, 00:17:35.092 "hdgst": false, 00:17:35.092 "ddgst": false, 00:17:35.092 "dhchap_key": "key3", 00:17:35.092 "allow_unrecognized_csi": false, 00:17:35.092 "method": "bdev_nvme_attach_controller", 00:17:35.092 "req_id": 1 00:17:35.092 } 00:17:35.092 Got JSON-RPC error response 00:17:35.092 response: 00:17:35.092 { 00:17:35.092 "code": -5, 00:17:35.092 "message": "Input/output error" 00:17:35.092 } 00:17:35.092 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:35.092 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.092 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.092 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.092 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:35.092 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:35.092 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:35.092 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.092 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.092 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:35.659 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:35.917 request: 00:17:35.917 { 00:17:35.917 "name": "nvme0", 00:17:35.917 "trtype": "tcp", 00:17:35.917 "traddr": "10.0.0.2", 00:17:35.917 "adrfam": "ipv4", 00:17:35.917 "trsvcid": "4420", 00:17:35.917 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:35.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:35.917 "prchk_reftag": false, 00:17:35.917 "prchk_guard": false, 00:17:35.917 "hdgst": false, 00:17:35.917 "ddgst": false, 00:17:35.917 "dhchap_key": "key0", 00:17:35.917 "dhchap_ctrlr_key": "key1", 00:17:35.917 "allow_unrecognized_csi": false, 00:17:35.917 "method": "bdev_nvme_attach_controller", 00:17:35.917 "req_id": 1 00:17:35.917 } 00:17:35.917 Got JSON-RPC error response 00:17:35.917 response: 00:17:35.917 { 00:17:35.917 "code": -5, 00:17:35.917 "message": "Input/output error" 00:17:35.917 } 00:17:35.917 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:35.917 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.917 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.917 12:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:36.175 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:36.175 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:36.175 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:36.433 nvme0n1 00:17:36.433 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:36.433 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:36.433 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.691 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.691 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.691 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.949 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:36.949 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.949 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.949 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.949 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:36.949 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:36.949 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:38.323 nvme0n1 00:17:38.323 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:38.323 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:38.323 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.582 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.582 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:38.582 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.582 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.582 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.582 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:38.582 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:38.582 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.840 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.840 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:17:38.840 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: --dhchap-ctrl-secret DHHC-1:03:MDc0YTNlNjE4NTM5YmUwYzI4ZTc3ODE0ZTM1Y2Q0NWUzZDc2Y2YwZTMzN2NjMjIwMTI1YTVkOWQ0MDkzMmJiNfbT4SM=: 00:17:39.774 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:39.774 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:39.774 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:39.774 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:39.774 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:39.774 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:39.774 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:39.774 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.774 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.033 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:17:40.033 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:40.033 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:40.033 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:40.033 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.033 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:40.033 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.033 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:40.033 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:40.033 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:40.968 request: 00:17:40.968 { 00:17:40.968 "name": "nvme0", 00:17:40.968 "trtype": "tcp", 00:17:40.968 "traddr": "10.0.0.2", 00:17:40.968 "adrfam": "ipv4", 00:17:40.968 "trsvcid": "4420", 00:17:40.968 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:40.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:40.968 "prchk_reftag": false, 00:17:40.968 "prchk_guard": false, 00:17:40.968 "hdgst": false, 00:17:40.968 "ddgst": false, 00:17:40.968 "dhchap_key": "key1", 00:17:40.968 "allow_unrecognized_csi": false, 00:17:40.968 "method": "bdev_nvme_attach_controller", 00:17:40.968 "req_id": 1 00:17:40.968 } 00:17:40.968 Got JSON-RPC error response 00:17:40.968 response: 00:17:40.968 { 00:17:40.968 "code": -5, 00:17:40.968 "message": "Input/output error" 00:17:40.968 } 00:17:40.968 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:40.968 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:40.968 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:40.968 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.968 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:40.968 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:40.968 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:42.343 nvme0n1 00:17:42.343 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:42.343 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:42.343 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.602 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.602 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.602 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.860 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:42.860 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.860 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.860 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.860 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:42.860 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:42.860 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:43.118 nvme0n1 00:17:43.118 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:43.118 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.118 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:43.376 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.376 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.376 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.942 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:43.942 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.942 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.942 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.942 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: '' 2s 00:17:43.942 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:43.943 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:43.943 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: 00:17:43.943 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:43.943 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:43.943 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:43.943 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: ]] 00:17:43.943 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YWVkNjZmNjU3MThkOWVmOWQ3OWQyYTRiOTgwN2FlZWTVMJfQ: 00:17:43.943 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:43.943 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:43.943 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: 2s 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: ]] 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTIwOTc5OTdhYzA5YzJkOTRkMzA2NjE2YWRhNjJlMjI2OTc1YmQ1ZDg3M2FlMTdmZxX61A==: 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:45.841 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:47.742 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:47.742 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:47.742 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:47.742 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:47.742 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:47.742 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:47.742 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:47.742 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.000 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:48.000 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.000 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.000 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.000 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:48.000 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:48.000 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:49.376 nvme0n1 00:17:49.376 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:49.376 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.376 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.376 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.376 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:49.376 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:50.310 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:50.310 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:50.310 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.569 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.569 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:50.569 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.569 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.569 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.569 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:50.569 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:50.827 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:50.827 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:50.827 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.085 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.085 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:51.085 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.085 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.085 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.085 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:51.085 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:51.085 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:51.085 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:51.085 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.085 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:51.085 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.085 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:51.085 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:52.022 request: 00:17:52.022 { 00:17:52.022 "name": "nvme0", 00:17:52.022 "dhchap_key": "key1", 00:17:52.022 "dhchap_ctrlr_key": "key3", 00:17:52.022 "method": "bdev_nvme_set_keys", 00:17:52.022 "req_id": 1 00:17:52.022 } 00:17:52.022 Got JSON-RPC error response 00:17:52.022 response: 00:17:52.022 { 00:17:52.022 "code": -13, 00:17:52.022 "message": "Permission denied" 00:17:52.022 } 00:17:52.022 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:52.022 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:52.022 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:52.022 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:52.022 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:52.022 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:52.022 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.022 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:17:52.022 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:53.398 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:53.398 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:53.398 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.398 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:53.398 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:53.398 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.398 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.398 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.398 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:53.398 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:53.398 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:54.773 nvme0n1 00:17:54.773 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.773 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.773 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.773 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.773 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:54.773 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:54.773 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:54.773 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
00:17:54.773 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.773 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:54.773 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.773 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:54.773 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:55.706 request: 00:17:55.706 { 00:17:55.706 "name": "nvme0", 00:17:55.706 "dhchap_key": "key2", 00:17:55.706 "dhchap_ctrlr_key": "key0", 00:17:55.706 "method": "bdev_nvme_set_keys", 00:17:55.706 "req_id": 1 00:17:55.706 } 00:17:55.706 Got JSON-RPC error response 00:17:55.706 response: 00:17:55.706 { 00:17:55.706 "code": -13, 00:17:55.706 "message": "Permission denied" 00:17:55.706 } 00:17:55.706 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:55.706 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.706 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.706 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.706 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:55.706 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:55.706 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.964 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:55.964 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:56.898 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:56.898 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:56.898 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.156 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:57.156 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:57.156 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:57.156 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1008867 00:17:57.156 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1008867 ']' 00:17:57.156 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1008867 00:17:57.156 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:57.156 
12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.156 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1008867 00:17:57.156 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:57.156 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:57.156 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1008867' 00:17:57.156 killing process with pid 1008867 00:17:57.156 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1008867 00:17:57.156 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1008867 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:57.723 rmmod nvme_tcp 00:17:57.723 rmmod nvme_fabrics 00:17:57.723 rmmod nvme_keyring 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1031729 ']' 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1031729 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1031729 ']' 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1031729 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1031729 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1031729' 00:17:57.723 killing process with pid 1031729 00:17:57.723 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1031729 00:17:57.723 12:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1031729 00:17:57.982 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:57.982 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:57.982 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:57.982 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:57.982 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:57.982 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:57.982 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:57.982 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:57.982 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:57.982 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.982 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.982 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.890 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:59.890 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.L4M /tmp/spdk.key-sha256.v5q /tmp/spdk.key-sha384.qlK /tmp/spdk.key-sha512.CFI /tmp/spdk.key-sha512.LJT /tmp/spdk.key-sha384.DuK /tmp/spdk.key-sha256.I1R '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:59.890 00:17:59.890 real 3m31.636s 00:17:59.890 user 8m16.814s 00:17:59.890 sys 0m28.478s 00:17:59.890 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.890 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.890 ************************************ 00:17:59.890 END TEST nvmf_auth_target 00:17:59.890 ************************************ 00:17:59.890 12:39:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:59.890 12:39:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:59.890 12:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:59.890 12:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.890 12:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:59.890 ************************************ 00:17:59.890 START TEST nvmf_bdevio_no_huge 00:17:59.890 ************************************ 00:17:59.890 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:00.149 * Looking for test storage... 
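Before the next suite starts, the nvmftestfini trace above tears the auth fixture down. A condensed sketch of that cleanup follows, built from the commands visible in the trace; the namespace removal is an assumption standing in for _remove_spdk_ns, whose body is not expanded here.

```bash
# Sketch of the nvmftestfini cleanup traced above. Interface and namespace
# names are this rig's; 'ip netns delete' is an assumed equivalent of the
# _remove_spdk_ns helper, which the trace does not expand.
sync
modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_fabrics/nvme_keyring going with it
modprobe -v -r nvme-fabrics

# Strip only the firewall rules the test added (they carry an SPDK_NVMF comment).
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Tear down the target-side namespace and flush the initiator port.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumption, see note above
ip -4 addr flush cvl_0_1

# Remove the per-run DH-CHAP key files and auth logs; the exact file list is
# printed in the 'rm -f' line of the trace.
rm -f /tmp/spdk.key-*
```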
00:18:00.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.149 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:00.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.149 --rc genhtml_branch_coverage=1 00:18:00.149 --rc genhtml_function_coverage=1 00:18:00.149 --rc genhtml_legend=1 00:18:00.149 --rc geninfo_all_blocks=1 00:18:00.150 --rc geninfo_unexecuted_blocks=1 00:18:00.150 00:18:00.150 ' 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:00.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.150 --rc genhtml_branch_coverage=1 00:18:00.150 --rc genhtml_function_coverage=1 00:18:00.150 --rc genhtml_legend=1 00:18:00.150 --rc geninfo_all_blocks=1 00:18:00.150 --rc geninfo_unexecuted_blocks=1 00:18:00.150 00:18:00.150 ' 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:00.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.150 --rc genhtml_branch_coverage=1 00:18:00.150 --rc genhtml_function_coverage=1 00:18:00.150 --rc genhtml_legend=1 00:18:00.150 --rc geninfo_all_blocks=1 00:18:00.150 --rc geninfo_unexecuted_blocks=1 00:18:00.150 00:18:00.150 ' 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:00.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.150 --rc genhtml_branch_coverage=1 00:18:00.150 --rc genhtml_function_coverage=1 00:18:00.150 --rc genhtml_legend=1 00:18:00.150 --rc geninfo_all_blocks=1 00:18:00.150 --rc geninfo_unexecuted_blocks=1 00:18:00.150 00:18:00.150 ' 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:00.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:00.150 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:02.687 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:02.687 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:02.687 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:02.687 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:02.687 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:02.687 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:02.687 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:02.687 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:02.688 
12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:02.688 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:02.688 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:02.688 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:02.688 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:02.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:18:02.688 00:18:02.688 --- 10.0.0.2 ping statistics --- 00:18:02.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.688 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:02.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:02.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:18:02.688 00:18:02.688 --- 10.0.0.1 ping statistics --- 00:18:02.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.688 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.688 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1036977 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1036977 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1036977 ']' 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:02.689 [2024-11-15 12:39:42.663924] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:18:02.689 [2024-11-15 12:39:42.664047] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:02.689 [2024-11-15 12:39:42.744606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:02.689 [2024-11-15 12:39:42.799582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.689 [2024-11-15 12:39:42.799638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.689 [2024-11-15 12:39:42.799662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.689 [2024-11-15 12:39:42.799673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.689 [2024-11-15 12:39:42.799683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
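The nvmf_tcp_init and nvmfappstart traces above amount to the following setup on this two-port e810 rig. The sketch condenses the exact commands lifted from the trace; device names and addresses are specific to this machine.

```bash
# Condensed sketch of the topology set up above: the target-side port is moved
# into its own network namespace so initiator and target can talk over two
# physical e810 ports on one host. Names/addresses are this rig's.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic in; the SPDK_NVMF comment lets cleanup strip the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# The no-huge variant then starts the target inside the namespace without
# hugepages: 1024 MiB of plain memory, core mask 0x78 (cores 3-6, matching the
# reactor notices above).
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
```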
00:18:02.689 [2024-11-15 12:39:42.800810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:02.689 [2024-11-15 12:39:42.800866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:02.689 [2024-11-15 12:39:42.800920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:02.689 [2024-11-15 12:39:42.800924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:02.689 [2024-11-15 12:39:42.960970] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:02.689 Malloc0 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.689 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:02.689 [2024-11-15 12:39:42.999129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.689 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.689 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:02.689 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:02.689 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:02.689 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:02.689 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:02.689 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:02.689 { 00:18:02.689 "params": { 00:18:02.689 "name": "Nvme$subsystem", 00:18:02.689 "trtype": "$TEST_TRANSPORT", 00:18:02.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:02.689 "adrfam": "ipv4", 00:18:02.689 "trsvcid": "$NVMF_PORT", 00:18:02.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:02.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:02.689 "hdgst": ${hdgst:-false}, 00:18:02.689 "ddgst": ${ddgst:-false} 00:18:02.689 }, 00:18:02.689 "method": "bdev_nvme_attach_controller" 00:18:02.689 } 00:18:02.689 EOF 00:18:02.689 )") 00:18:02.689 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:02.689 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:02.689 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:02.689 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:02.689 "params": { 00:18:02.689 "name": "Nvme1", 00:18:02.689 "trtype": "tcp", 00:18:02.689 "traddr": "10.0.0.2", 00:18:02.689 "adrfam": "ipv4", 00:18:02.689 "trsvcid": "4420", 00:18:02.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.689 "hdgst": false, 00:18:02.689 "ddgst": false 00:18:02.689 }, 00:18:02.689 "method": "bdev_nvme_attach_controller" 00:18:02.689 }' 00:18:02.948 [2024-11-15 12:39:43.051100] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
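Put together, the RPCs traced above build the target side that bdevio then exercises, and gen_nvmf_target_json feeds bdevio its bdev_nvme_attach_controller parameters over a pipe (/dev/fd/62 in the trace). A minimal sketch follows; the trace only prints the inner attach-controller entry, so the surrounding subsystems/bdev wrapper is shown schematically as the usual SPDK JSON-config shape rather than a verbatim copy.

```bash
# Sketch of the target-side setup and the bdevio invocation traced above.
RPC=scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevio is its own SPDK app (also run --no-huge here) and reads its bdev
# config from a file descriptor; only the inner entry appears verbatim above.
./test/bdev/bdevio/bdevio --no-huge -s 1024 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)
```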
00:18:02.948 [2024-11-15 12:39:43.051186] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1037007 ] 00:18:02.948 [2024-11-15 12:39:43.125199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:02.948 [2024-11-15 12:39:43.189455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.948 [2024-11-15 12:39:43.189508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.948 [2024-11-15 12:39:43.189512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.206 I/O targets: 00:18:03.206 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:03.206 00:18:03.206 00:18:03.206 CUnit - A unit testing framework for C - Version 2.1-3 00:18:03.206 http://cunit.sourceforge.net/ 00:18:03.206 00:18:03.206 00:18:03.206 Suite: bdevio tests on: Nvme1n1 00:18:03.472 Test: blockdev write read block ...passed 00:18:03.472 Test: blockdev write zeroes read block ...passed 00:18:03.472 Test: blockdev write zeroes read no split ...passed 00:18:03.472 Test: blockdev write zeroes read split ...passed 00:18:03.472 Test: blockdev write zeroes read split partial ...passed 00:18:03.472 Test: blockdev reset ...[2024-11-15 12:39:43.664227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:03.472 [2024-11-15 12:39:43.664334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f46e0 (9): Bad file descriptor 00:18:03.472 [2024-11-15 12:39:43.723749] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
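The I/O target size reported above is consistent with the malloc bdev created earlier: bdev_malloc_create 64 512 asks for 64 MiB carved into 512-byte blocks, which works out as follows.

```bash
# 64 MiB split into 512-byte blocks gives the block count bdevio reports above.
echo $(( 64 * 1024 * 1024 / 512 ))   # 131072
```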
00:18:03.472 passed 00:18:03.472 Test: blockdev write read 8 blocks ...passed 00:18:03.472 Test: blockdev write read size > 128k ...passed 00:18:03.472 Test: blockdev write read invalid size ...passed 00:18:03.472 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:03.472 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:03.472 Test: blockdev write read max offset ...passed 00:18:03.730 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:03.730 Test: blockdev writev readv 8 blocks ...passed 00:18:03.730 Test: blockdev writev readv 30 x 1block ...passed 00:18:03.730 Test: blockdev writev readv block ...passed 00:18:03.730 Test: blockdev writev readv size > 128k ...passed 00:18:03.730 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:03.730 Test: blockdev comparev and writev ...[2024-11-15 12:39:43.937919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:03.730 [2024-11-15 12:39:43.937955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.730 [2024-11-15 12:39:43.937980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:03.730 [2024-11-15 12:39:43.937997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.730 [2024-11-15 12:39:43.938374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:03.730 [2024-11-15 12:39:43.938399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:03.730 [2024-11-15 12:39:43.938421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:03.730 [2024-11-15 12:39:43.938437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:03.730 [2024-11-15 12:39:43.938820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:03.730 [2024-11-15 12:39:43.938846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:03.730 [2024-11-15 12:39:43.938870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:03.730 [2024-11-15 12:39:43.938886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:03.730 [2024-11-15 12:39:43.939257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:03.730 [2024-11-15 12:39:43.939282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:03.730 [2024-11-15 12:39:43.939305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:03.730 [2024-11-15 12:39:43.939322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:03.730 passed 00:18:03.730 Test: blockdev nvme passthru rw ...passed 00:18:03.730 Test: blockdev nvme passthru vendor specific ...[2024-11-15 12:39:44.021019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:03.730 [2024-11-15 12:39:44.021046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:03.730 [2024-11-15 12:39:44.021184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:03.730 [2024-11-15 12:39:44.021208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:03.730 [2024-11-15 12:39:44.021340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:03.730 [2024-11-15 12:39:44.021363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:03.730 [2024-11-15 12:39:44.021499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:03.730 [2024-11-15 12:39:44.021522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:03.730 passed 00:18:03.730 Test: blockdev nvme admin passthru ...passed 00:18:03.988 Test: blockdev copy ...passed 00:18:03.988 00:18:03.988 Run Summary: Type Total Ran Passed Failed Inactive 00:18:03.988 suites 1 1 n/a 0 0 00:18:03.988 tests 23 23 23 0 0 00:18:03.988 asserts 152 152 152 0 n/a 00:18:03.988 00:18:03.988 Elapsed time = 1.063 seconds 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:04.247 rmmod nvme_tcp 00:18:04.247 rmmod nvme_fabrics 00:18:04.247 rmmod nvme_keyring 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1036977 ']' 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1036977 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1036977 ']' 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1036977 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1036977 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1036977' 00:18:04.247 killing process with pid 1036977 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1036977 00:18:04.247 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1036977 00:18:04.816 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:04.816 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:04.816 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:04.816 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:04.816 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:04.816 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:04.816 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:04.816 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:04.816 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:04.816 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.816 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.816 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.723 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:06.723 00:18:06.723 real 0m6.720s 00:18:06.723 user 0m11.454s 00:18:06.723 sys 0m2.611s 00:18:06.723 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.723 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.723 ************************************ 00:18:06.723 END TEST nvmf_bdevio_no_huge 00:18:06.723 ************************************ 00:18:06.723 12:39:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:06.723 12:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:06.723 12:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.723 12:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:06.723 ************************************ 00:18:06.723 START TEST nvmf_tls 00:18:06.723 ************************************ 00:18:06.723 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:06.723 * Looking for test storage... 00:18:06.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:06.723 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:06.723 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:18:06.723 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:06.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.983 --rc genhtml_branch_coverage=1 00:18:06.983 --rc genhtml_function_coverage=1 00:18:06.983 --rc genhtml_legend=1 00:18:06.983 --rc geninfo_all_blocks=1 00:18:06.983 --rc geninfo_unexecuted_blocks=1 00:18:06.983 00:18:06.983 ' 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:06.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.983 --rc genhtml_branch_coverage=1 00:18:06.983 --rc genhtml_function_coverage=1 00:18:06.983 --rc genhtml_legend=1 00:18:06.983 --rc geninfo_all_blocks=1 00:18:06.983 --rc geninfo_unexecuted_blocks=1 00:18:06.983 00:18:06.983 ' 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:06.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.983 --rc genhtml_branch_coverage=1 00:18:06.983 --rc genhtml_function_coverage=1 00:18:06.983 --rc genhtml_legend=1 00:18:06.983 --rc geninfo_all_blocks=1 00:18:06.983 --rc geninfo_unexecuted_blocks=1 00:18:06.983 00:18:06.983 ' 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:06.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.983 --rc genhtml_branch_coverage=1 00:18:06.983 --rc genhtml_function_coverage=1 00:18:06.983 --rc genhtml_legend=1 00:18:06.983 --rc geninfo_all_blocks=1 00:18:06.983 --rc geninfo_unexecuted_blocks=1 00:18:06.983 00:18:06.983 ' 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
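The xtrace above steps through the version helpers in scripts/common.sh: "lt 1.15 2" splits each version string on the characters ".-:" and compares the fields numerically to decide which lcov options apply. Below is a minimal Python sketch of that dotted-version comparison, for illustration only; the function name and the zero-padding of missing fields are my own choices, not SPDK's.

# Hypothetical re-implementation of the cmp_versions/lt logic traced above.
import re

def cmp_versions(ver1: str, op: str, ver2: str) -> bool:
    # Split on the same separator set the shell helper uses (IFS=.-:)
    f1 = [int(x) for x in re.split(r"[.:-]", ver1) if x.isdigit()]
    f2 = [int(x) for x in re.split(r"[.:-]", ver2) if x.isdigit()]
    # Pad the shorter version with zeros so "1.15" can be compared with "2"
    n = max(len(f1), len(f2))
    f1 += [0] * (n - len(f1))
    f2 += [0] * (n - len(f2))
    return {"<": f1 < f2, ">": f1 > f2, "==": f1 == f2}[op]

print(cmp_versions("1.15", "<", "2"))  # True, matching "lt 1.15 2" in the trace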
00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.983 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:06.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:06.984 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.517 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:09.517 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:09.517 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:09.517 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:09.517 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:09.518 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:09.518 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:09.518 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:09.518 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:09.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:18:09.518 00:18:09.518 --- 10.0.0.2 ping statistics --- 00:18:09.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.518 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:09.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:09.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:18:09.518 00:18:09.518 --- 10.0.0.1 ping statistics --- 00:18:09.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.518 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:09.518 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1039205 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1039205 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1039205 ']' 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.519 [2024-11-15 12:39:49.473104] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:18:09.519 [2024-11-15 12:39:49.473179] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.519 [2024-11-15 12:39:49.545392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.519 [2024-11-15 12:39:49.601112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.519 [2024-11-15 12:39:49.601163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.519 [2024-11-15 12:39:49.601187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.519 [2024-11-15 12:39:49.601198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.519 [2024-11-15 12:39:49.601207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.519 [2024-11-15 12:39:49.601838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:09.519 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:09.777 true 00:18:09.777 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:09.777 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:10.035 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:10.035 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:10.035 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:10.293 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:10.293 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:10.551 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:10.551 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:10.551 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:10.810 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:10.810 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:11.082 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:11.082 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:11.082 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:11.082 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:11.354 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:11.354 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:11.354 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:11.635 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:11.635 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:11.897 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:11.897 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:11.897 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:12.156 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:12.156 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:12.414 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:12.414 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:12.414 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:12.414 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:12.414 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:12.414 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:12.414 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:12.414 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:12.414 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.WNswyHnNPZ 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.ctiDL7bCbM 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.WNswyHnNPZ 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.ctiDL7bCbM 00:18:12.673 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:12.932 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:13.190 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.WNswyHnNPZ 00:18:13.190 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.WNswyHnNPZ 00:18:13.190 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:13.448 [2024-11-15 12:39:53.716405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.448 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:13.706 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:13.965 [2024-11-15 12:39:54.261882] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:13.965 [2024-11-15 12:39:54.262133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.965 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:14.223 malloc0 00:18:14.223 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:14.789 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WNswyHnNPZ 00:18:14.789 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:15.047 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.WNswyHnNPZ 00:18:27.248 Initializing NVMe Controllers 00:18:27.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:27.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:27.248 Initialization complete. Launching workers. 00:18:27.248 ======================================================== 00:18:27.249 Latency(us) 00:18:27.249 Device Information : IOPS MiB/s Average min max 00:18:27.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8659.02 33.82 7393.04 1029.95 9017.58 00:18:27.249 ======================================================== 00:18:27.249 Total : 8659.02 33.82 7393.04 1029.95 9017.58 00:18:27.249 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WNswyHnNPZ 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WNswyHnNPZ 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1041115 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1041115 /var/tmp/bdevperf.sock 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1041115 ']' 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:27.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.249 [2024-11-15 12:40:05.509222] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:18:27.249 [2024-11-15 12:40:05.509289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041115 ] 00:18:27.249 [2024-11-15 12:40:05.573562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.249 [2024-11-15 12:40:05.630005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.249 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WNswyHnNPZ 00:18:27.249 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.249 [2024-11-15 12:40:06.310922] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.249 TLSTESTn1 00:18:27.249 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:27.249 Running I/O for 10 seconds... 
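The key0 and key_2 strings created earlier in this trace (NVMeTLSkey-1:01:MDAx...JEiQ: and NVMeTLSkey-1:01:ZmZl...Bm/Y:) follow the NVMe TLS PSK interchange framing that format_interchange_psk emits: a NVMeTLSkey-1 prefix, a two-digit hash identifier, and a base64 field. Decoding the base64 in this log shows the raw key bytes followed by four extra bytes that look like a little-endian CRC-32 of the key. The Python sketch below reproduces that framing; treat the CRC detail as an assumption inferred from this log rather than a spec citation.

# Sketch of the PSK interchange framing as inferred from the keys in this log:
# base64(key bytes + CRC-32 of the key, assumed little-endian), wrapped in a
# prefix and a two-digit hash identifier. Not a substitute for SPDK's helper.
import base64
import zlib

def interchange_psk(key: bytes, hash_id: int = 1) -> str:
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # assumed integrity tail
    blob = base64.b64encode(key + crc).decode("ascii")
    return "NVMeTLSkey-1:{:02d}:{}:".format(hash_id, blob)

# The test feeds the literal string "00112233445566778899aabbccddeeff" as key
# material; the result should match the key0 value written to /tmp/tmp.WNswyHnNPZ above.
print(interchange_psk(b"00112233445566778899aabbccddeeff"))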
00:18:28.625 3500.00 IOPS, 13.67 MiB/s [2024-11-15T11:40:09.905Z] 3533.50 IOPS, 13.80 MiB/s [2024-11-15T11:40:10.842Z] 3582.33 IOPS, 13.99 MiB/s [2024-11-15T11:40:11.777Z] 3580.00 IOPS, 13.98 MiB/s [2024-11-15T11:40:12.711Z] 3586.20 IOPS, 14.01 MiB/s [2024-11-15T11:40:13.645Z] 3558.83 IOPS, 13.90 MiB/s [2024-11-15T11:40:14.579Z] 3570.29 IOPS, 13.95 MiB/s [2024-11-15T11:40:15.953Z] 3579.50 IOPS, 13.98 MiB/s [2024-11-15T11:40:16.888Z] 3581.78 IOPS, 13.99 MiB/s [2024-11-15T11:40:16.888Z] 3588.20 IOPS, 14.02 MiB/s 00:18:36.544 Latency(us) 00:18:36.544 [2024-11-15T11:40:16.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.544 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:36.544 Verification LBA range: start 0x0 length 0x2000 00:18:36.544 TLSTESTn1 : 10.02 3593.50 14.04 0.00 0.00 35560.73 6262.33 31845.64 00:18:36.544 [2024-11-15T11:40:16.888Z] =================================================================================================================== 00:18:36.544 [2024-11-15T11:40:16.888Z] Total : 3593.50 14.04 0.00 0.00 35560.73 6262.33 31845.64 00:18:36.544 { 00:18:36.544 "results": [ 00:18:36.544 { 00:18:36.544 "job": "TLSTESTn1", 00:18:36.544 "core_mask": "0x4", 00:18:36.544 "workload": "verify", 00:18:36.544 "status": "finished", 00:18:36.544 "verify_range": { 00:18:36.544 "start": 0, 00:18:36.544 "length": 8192 00:18:36.544 }, 00:18:36.544 "queue_depth": 128, 00:18:36.544 "io_size": 4096, 00:18:36.544 "runtime": 10.020325, 00:18:36.544 "iops": 3593.4962189350144, 00:18:36.544 "mibps": 14.0370946052149, 00:18:36.544 "io_failed": 0, 00:18:36.544 "io_timeout": 0, 00:18:36.544 "avg_latency_us": 35560.729356274736, 00:18:36.544 "min_latency_us": 6262.328888888889, 00:18:36.544 "max_latency_us": 31845.64148148148 00:18:36.544 } 00:18:36.544 ], 00:18:36.544 "core_count": 1 00:18:36.544 } 00:18:36.544 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:36.544 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1041115 00:18:36.544 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1041115 ']' 00:18:36.544 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1041115 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1041115 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1041115' 00:18:36.545 killing process with pid 1041115 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1041115 00:18:36.545 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.545 00:18:36.545 Latency(us) 00:18:36.545 [2024-11-15T11:40:16.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.545 [2024-11-15T11:40:16.889Z] 
=================================================================================================================== 00:18:36.545 [2024-11-15T11:40:16.889Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1041115 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ctiDL7bCbM 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ctiDL7bCbM 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ctiDL7bCbM 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ctiDL7bCbM 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1042432 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1042432 /var/tmp/bdevperf.sock 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1042432 ']' 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.545 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.804 [2024-11-15 12:40:16.897904] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:18:36.804 [2024-11-15 12:40:16.898008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042432 ] 00:18:36.804 [2024-11-15 12:40:16.966213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.804 [2024-11-15 12:40:17.023220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.804 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.804 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:36.804 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ctiDL7bCbM 00:18:37.063 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:37.321 [2024-11-15 12:40:17.649357] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:37.321 [2024-11-15 12:40:17.655651] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:37.321 [2024-11-15 12:40:17.656438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1d2c0 (107): Transport endpoint is not connected 00:18:37.321 [2024-11-15 12:40:17.657431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1d2c0 (9): Bad file descriptor 00:18:37.321 [2024-11-15 12:40:17.658431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:37.321 [2024-11-15 12:40:17.658452] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:37.321 [2024-11-15 12:40:17.658476] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:37.321 [2024-11-15 12:40:17.658494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:37.321 request: 00:18:37.321 { 00:18:37.321 "name": "TLSTEST", 00:18:37.321 "trtype": "tcp", 00:18:37.321 "traddr": "10.0.0.2", 00:18:37.321 "adrfam": "ipv4", 00:18:37.321 "trsvcid": "4420", 00:18:37.321 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.321 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.321 "prchk_reftag": false, 00:18:37.321 "prchk_guard": false, 00:18:37.321 "hdgst": false, 00:18:37.321 "ddgst": false, 00:18:37.321 "psk": "key0", 00:18:37.321 "allow_unrecognized_csi": false, 00:18:37.321 "method": "bdev_nvme_attach_controller", 00:18:37.321 "req_id": 1 00:18:37.321 } 00:18:37.321 Got JSON-RPC error response 00:18:37.321 response: 00:18:37.321 { 00:18:37.321 "code": -5, 00:18:37.321 "message": "Input/output error" 00:18:37.321 } 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1042432 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1042432 ']' 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1042432 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1042432 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1042432' 00:18:37.580 killing process with pid 1042432 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1042432 00:18:37.580 Received shutdown signal, test time was about 10.000000 seconds 00:18:37.580 00:18:37.580 Latency(us) 00:18:37.580 [2024-11-15T11:40:17.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.580 [2024-11-15T11:40:17.924Z] =================================================================================================================== 00:18:37.580 [2024-11-15T11:40:17.924Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1042432 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WNswyHnNPZ 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.WNswyHnNPZ 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WNswyHnNPZ 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WNswyHnNPZ 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1042548 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1042548 /var/tmp/bdevperf.sock 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1042548 ']' 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.580 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.840 [2024-11-15 12:40:17.962809] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:18:37.840 [2024-11-15 12:40:17.962912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042548 ] 00:18:37.840 [2024-11-15 12:40:18.027747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.840 [2024-11-15 12:40:18.084162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.099 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.099 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:38.099 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WNswyHnNPZ 00:18:38.358 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:38.617 [2024-11-15 12:40:18.708088] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.617 [2024-11-15 12:40:18.718308] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:38.617 [2024-11-15 12:40:18.718339] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:38.617 [2024-11-15 12:40:18.718389] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:38.617 [2024-11-15 12:40:18.719144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x78b2c0 (107): Transport endpoint is not connected 00:18:38.617 [2024-11-15 12:40:18.720136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x78b2c0 (9): Bad file descriptor 00:18:38.617 [2024-11-15 12:40:18.721135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:38.617 [2024-11-15 12:40:18.721155] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:38.617 [2024-11-15 12:40:18.721167] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:38.617 [2024-11-15 12:40:18.721185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
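Annotation: this case (@150) and the next one (@153) change only the NQNs. The "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" errors above show that the target resolves the TLS PSK per (hostnqn, subnqn) pair, so a key registered for host1/cnode1 is not found when host2 or cnode2 is used and the attach fails as the JSON below records. For contrast only, a hypothetical target-side registration that would cover host2 (key name and path are placeholders, not taken from this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # target RPC, default /var/tmp/spdk.sock

    # The PSK is looked up per (hostnqn, subnqn), so each allowed host needs
    # its own nvmf_subsystem_add_host entry with a key it shares with the target.
    $rpc keyring_file_add_key key1 /tmp/tmp.XXXXXXXXXX
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key1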
00:18:38.617 request: 00:18:38.617 { 00:18:38.617 "name": "TLSTEST", 00:18:38.617 "trtype": "tcp", 00:18:38.617 "traddr": "10.0.0.2", 00:18:38.617 "adrfam": "ipv4", 00:18:38.617 "trsvcid": "4420", 00:18:38.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.617 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:38.617 "prchk_reftag": false, 00:18:38.617 "prchk_guard": false, 00:18:38.617 "hdgst": false, 00:18:38.617 "ddgst": false, 00:18:38.617 "psk": "key0", 00:18:38.617 "allow_unrecognized_csi": false, 00:18:38.617 "method": "bdev_nvme_attach_controller", 00:18:38.617 "req_id": 1 00:18:38.617 } 00:18:38.617 Got JSON-RPC error response 00:18:38.617 response: 00:18:38.617 { 00:18:38.617 "code": -5, 00:18:38.617 "message": "Input/output error" 00:18:38.617 } 00:18:38.617 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1042548 00:18:38.617 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1042548 ']' 00:18:38.617 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1042548 00:18:38.617 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:38.617 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.617 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1042548 00:18:38.617 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:38.617 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:38.617 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1042548' 00:18:38.617 killing process with pid 1042548 00:18:38.617 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1042548 00:18:38.617 Received shutdown signal, test time was about 10.000000 seconds 00:18:38.617 00:18:38.617 Latency(us) 00:18:38.617 [2024-11-15T11:40:18.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.617 [2024-11-15T11:40:18.961Z] =================================================================================================================== 00:18:38.617 [2024-11-15T11:40:18.961Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:38.617 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1042548 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WNswyHnNPZ 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.WNswyHnNPZ 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WNswyHnNPZ 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WNswyHnNPZ 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:38.876 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1042623 00:18:38.877 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:38.877 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:38.877 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1042623 /var/tmp/bdevperf.sock 00:18:38.877 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1042623 ']' 00:18:38.877 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.877 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.877 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.877 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.877 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.877 [2024-11-15 12:40:19.021304] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:18:38.877 [2024-11-15 12:40:19.021417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042623 ] 00:18:38.877 [2024-11-15 12:40:19.092572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.877 [2024-11-15 12:40:19.151498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.136 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.136 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:39.136 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WNswyHnNPZ 00:18:39.394 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:39.653 [2024-11-15 12:40:19.773205] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:39.653 [2024-11-15 12:40:19.780990] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:39.653 [2024-11-15 12:40:19.781035] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:39.653 [2024-11-15 12:40:19.781095] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:39.653 [2024-11-15 12:40:19.781179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4c2c0 (107): Transport endpoint is not connected 00:18:39.653 [2024-11-15 12:40:19.782169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4c2c0 (9): Bad file descriptor 00:18:39.653 [2024-11-15 12:40:19.783168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:39.653 [2024-11-15 12:40:19.783188] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:39.653 [2024-11-15 12:40:19.783201] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:39.653 [2024-11-15 12:40:19.783219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:39.653 request: 00:18:39.653 { 00:18:39.653 "name": "TLSTEST", 00:18:39.653 "trtype": "tcp", 00:18:39.653 "traddr": "10.0.0.2", 00:18:39.653 "adrfam": "ipv4", 00:18:39.653 "trsvcid": "4420", 00:18:39.653 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:39.653 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.653 "prchk_reftag": false, 00:18:39.653 "prchk_guard": false, 00:18:39.653 "hdgst": false, 00:18:39.653 "ddgst": false, 00:18:39.653 "psk": "key0", 00:18:39.653 "allow_unrecognized_csi": false, 00:18:39.653 "method": "bdev_nvme_attach_controller", 00:18:39.653 "req_id": 1 00:18:39.653 } 00:18:39.653 Got JSON-RPC error response 00:18:39.653 response: 00:18:39.653 { 00:18:39.653 "code": -5, 00:18:39.653 "message": "Input/output error" 00:18:39.653 } 00:18:39.653 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1042623 00:18:39.653 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1042623 ']' 00:18:39.653 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1042623 00:18:39.653 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:39.653 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.653 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1042623 00:18:39.653 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:39.654 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:39.654 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1042623' 00:18:39.654 killing process with pid 1042623 00:18:39.654 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1042623 00:18:39.654 Received shutdown signal, test time was about 10.000000 seconds 00:18:39.654 00:18:39.654 Latency(us) 00:18:39.654 [2024-11-15T11:40:19.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.654 [2024-11-15T11:40:19.998Z] =================================================================================================================== 00:18:39.654 [2024-11-15T11:40:19.998Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:39.654 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1042623 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:39.913 
12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1042740 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1042740 /var/tmp/bdevperf.sock 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1042740 ']' 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.913 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.913 [2024-11-15 12:40:20.114902] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
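Annotation: the @156 case starting here passes an empty string as the PSK path. keyring_file_add_key only accepts absolute paths, so the registration itself is rejected ("Non-absolute paths are not allowed", code -1 in the output that follows), and the later attach fails with -126 "Required key not available" because key0 never made it onto the keyring. A minimal reproduction against the same bdevperf socket, with the empty argument being the whole point:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # An empty (non-absolute) path is rejected by the keyring...
    $rpc -s $sock keyring_file_add_key key0 ''

    # ...so an attach that references key0 cannot load the PSK and returns -126.
    $rpc -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0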
00:18:39.913 [2024-11-15 12:40:20.114990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042740 ] 00:18:39.913 [2024-11-15 12:40:20.207159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.172 [2024-11-15 12:40:20.289473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.172 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.172 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:40.172 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:40.430 [2024-11-15 12:40:20.734730] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:40.430 [2024-11-15 12:40:20.734780] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:40.430 request: 00:18:40.430 { 00:18:40.430 "name": "key0", 00:18:40.430 "path": "", 00:18:40.430 "method": "keyring_file_add_key", 00:18:40.430 "req_id": 1 00:18:40.430 } 00:18:40.430 Got JSON-RPC error response 00:18:40.430 response: 00:18:40.430 { 00:18:40.430 "code": -1, 00:18:40.430 "message": "Operation not permitted" 00:18:40.430 } 00:18:40.430 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:40.688 [2024-11-15 12:40:21.003560] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:40.688 [2024-11-15 12:40:21.003622] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:40.689 request: 00:18:40.689 { 00:18:40.689 "name": "TLSTEST", 00:18:40.689 "trtype": "tcp", 00:18:40.689 "traddr": "10.0.0.2", 00:18:40.689 "adrfam": "ipv4", 00:18:40.689 "trsvcid": "4420", 00:18:40.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:40.689 "prchk_reftag": false, 00:18:40.689 "prchk_guard": false, 00:18:40.689 "hdgst": false, 00:18:40.689 "ddgst": false, 00:18:40.689 "psk": "key0", 00:18:40.689 "allow_unrecognized_csi": false, 00:18:40.689 "method": "bdev_nvme_attach_controller", 00:18:40.689 "req_id": 1 00:18:40.689 } 00:18:40.689 Got JSON-RPC error response 00:18:40.689 response: 00:18:40.689 { 00:18:40.689 "code": -126, 00:18:40.689 "message": "Required key not available" 00:18:40.689 } 00:18:40.689 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1042740 00:18:40.689 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1042740 ']' 00:18:40.689 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1042740 00:18:40.689 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:40.689 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.689 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1042740 00:18:40.947 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1042740' 00:18:40.948 killing process with pid 1042740 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1042740 00:18:40.948 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.948 00:18:40.948 Latency(us) 00:18:40.948 [2024-11-15T11:40:21.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.948 [2024-11-15T11:40:21.292Z] =================================================================================================================== 00:18:40.948 [2024-11-15T11:40:21.292Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1042740 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1039205 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1039205 ']' 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1039205 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.948 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1039205 00:18:41.206 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:41.206 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:41.206 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1039205' 00:18:41.206 killing process with pid 1039205 00:18:41.207 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1039205 00:18:41.207 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1039205 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:41.466 12:40:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.SErSMJc6E0 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.SErSMJc6E0 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1043020 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1043020 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1043020 ']' 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.466 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.466 [2024-11-15 12:40:21.648693] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:18:41.466 [2024-11-15 12:40:21.648816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.466 [2024-11-15 12:40:21.721942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.466 [2024-11-15 12:40:21.781159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.466 [2024-11-15 12:40:21.781217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
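Annotation: tls.sh@160-@163 above build a key in the NVMe TLS PSK interchange format. The helper base64-encodes the configured key material together with a short checksum, prefixes it with "NVMeTLSkey-1:02:" (the middle field comes from the helper's digest argument, 2 here), writes it to a mktemp file with no trailing newline, and restricts the file to mode 0600, which is what keyring_file_add_key later expects. The same steps in plain shell, reusing the literal key string already printed in the log (the temp file name is whatever mktemp returns):

    # Key in the NVMe TLS PSK interchange format, exactly as printed by the helper above.
    key='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'

    key_path=$(mktemp)            # this run got /tmp/tmp.SErSMJc6E0
    echo -n "$key" > "$key_path"  # no trailing newline, matching the test's echo -n
    chmod 0600 "$key_path"        # looser modes are rejected by keyring_file_add_key (see the 0666 case at @172 below)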
00:18:41.466 [2024-11-15 12:40:21.781231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.466 [2024-11-15 12:40:21.781241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.466 [2024-11-15 12:40:21.781251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:41.466 [2024-11-15 12:40:21.781873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.725 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.725 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:41.725 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:41.725 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.725 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.725 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.725 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.SErSMJc6E0 00:18:41.725 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SErSMJc6E0 00:18:41.725 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:41.983 [2024-11-15 12:40:22.178792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.983 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:42.241 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:42.500 [2024-11-15 12:40:22.716269] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:42.500 [2024-11-15 12:40:22.716502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.500 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:42.759 malloc0 00:18:42.759 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:43.018 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SErSMJc6E0 00:18:43.277 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SErSMJc6E0 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SErSMJc6E0 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1043307 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1043307 /var/tmp/bdevperf.sock 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1043307 ']' 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.535 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.535 [2024-11-15 12:40:23.854228] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
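Annotation: this is the first case in the section expected to succeed. The target side above created the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a malloc0 namespace, a TLS-enabled listener on 10.0.0.2:4420, registered /tmp/tmp.SErSMJc6E0 on its keyring, and allowed host1 with that PSK; bdevperf then registers the same key and attaches. Condensed, the target-side sequence taken verbatim from the log is:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.SErSMJc6E0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With both sides holding the same interchange-format PSK, the attach below succeeds (the TLSTESTn1 bdev appears) and bdevperf.py perform_tests drives the 10-second verify workload whose IOPS and latency results are printed next.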
00:18:43.535 [2024-11-15 12:40:23.854313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043307 ] 00:18:43.794 [2024-11-15 12:40:23.921088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.794 [2024-11-15 12:40:23.976830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.794 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.794 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:43.794 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SErSMJc6E0 00:18:44.052 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:44.310 [2024-11-15 12:40:24.593290] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.571 TLSTESTn1 00:18:44.571 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:44.571 Running I/O for 10 seconds... 00:18:46.874 3578.00 IOPS, 13.98 MiB/s [2024-11-15T11:40:28.150Z] 3578.00 IOPS, 13.98 MiB/s [2024-11-15T11:40:29.084Z] 3592.33 IOPS, 14.03 MiB/s [2024-11-15T11:40:30.017Z] 3583.25 IOPS, 14.00 MiB/s [2024-11-15T11:40:30.951Z] 3591.80 IOPS, 14.03 MiB/s [2024-11-15T11:40:31.883Z] 3589.17 IOPS, 14.02 MiB/s [2024-11-15T11:40:32.816Z] 3591.00 IOPS, 14.03 MiB/s [2024-11-15T11:40:34.188Z] 3590.88 IOPS, 14.03 MiB/s [2024-11-15T11:40:35.121Z] 3598.22 IOPS, 14.06 MiB/s [2024-11-15T11:40:35.121Z] 3604.70 IOPS, 14.08 MiB/s 00:18:54.777 Latency(us) 00:18:54.777 [2024-11-15T11:40:35.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.777 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:54.777 Verification LBA range: start 0x0 length 0x2000 00:18:54.777 TLSTESTn1 : 10.02 3610.35 14.10 0.00 0.00 35395.79 6043.88 29903.83 00:18:54.777 [2024-11-15T11:40:35.121Z] =================================================================================================================== 00:18:54.777 [2024-11-15T11:40:35.121Z] Total : 3610.35 14.10 0.00 0.00 35395.79 6043.88 29903.83 00:18:54.777 { 00:18:54.777 "results": [ 00:18:54.777 { 00:18:54.777 "job": "TLSTESTn1", 00:18:54.777 "core_mask": "0x4", 00:18:54.777 "workload": "verify", 00:18:54.777 "status": "finished", 00:18:54.777 "verify_range": { 00:18:54.777 "start": 0, 00:18:54.777 "length": 8192 00:18:54.777 }, 00:18:54.777 "queue_depth": 128, 00:18:54.777 "io_size": 4096, 00:18:54.778 "runtime": 10.019258, 00:18:54.778 "iops": 3610.3471933749984, 00:18:54.778 "mibps": 14.102918724121087, 00:18:54.778 "io_failed": 0, 00:18:54.778 "io_timeout": 0, 00:18:54.778 "avg_latency_us": 35395.790477325536, 00:18:54.778 "min_latency_us": 6043.875555555555, 00:18:54.778 "max_latency_us": 29903.834074074075 00:18:54.778 } 00:18:54.778 ], 00:18:54.778 
"core_count": 1 00:18:54.778 } 00:18:54.778 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:54.778 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1043307 00:18:54.778 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1043307 ']' 00:18:54.778 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1043307 00:18:54.778 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.778 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.778 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1043307 00:18:54.778 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:54.778 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:54.778 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1043307' 00:18:54.778 killing process with pid 1043307 00:18:54.778 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1043307 00:18:54.778 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.778 00:18:54.778 Latency(us) 00:18:54.778 [2024-11-15T11:40:35.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.778 [2024-11-15T11:40:35.122Z] =================================================================================================================== 00:18:54.778 [2024-11-15T11:40:35.122Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:54.778 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1043307 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.SErSMJc6E0 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SErSMJc6E0 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SErSMJc6E0 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SErSMJc6E0 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SErSMJc6E0 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1044620 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1044620 /var/tmp/bdevperf.sock 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1044620 ']' 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.778 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.036 [2024-11-15 12:40:35.167002] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
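Annotation: between the successful run and this one, tls.sh@171 loosened the key file to 0666. keyring_file_add_key appears to require that only the owner can access the file (0600 worked earlier in this log, 0666 is rejected), so the registration below fails with "Invalid permissions for key file ... 0100666" and the attach then fails with -126, which is exactly what the NOT wrapper at @172 expects. The offending step and its fix, sketched in shell:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    chmod 0666 /tmp/tmp.SErSMJc6E0                                # what @171 did: key readable by everyone
    $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.SErSMJc6E0   # rejected: invalid permissions

    chmod 0600 /tmp/tmp.SErSMJc6E0                                # owner-only again
    $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.SErSMJc6E0   # this form succeeded earlier in the log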
00:18:55.036 [2024-11-15 12:40:35.167096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044620 ] 00:18:55.036 [2024-11-15 12:40:35.232180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.036 [2024-11-15 12:40:35.286573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.294 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.294 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:55.294 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SErSMJc6E0 00:18:55.552 [2024-11-15 12:40:35.646238] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SErSMJc6E0': 0100666 00:18:55.552 [2024-11-15 12:40:35.646273] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:55.552 request: 00:18:55.552 { 00:18:55.552 "name": "key0", 00:18:55.552 "path": "/tmp/tmp.SErSMJc6E0", 00:18:55.552 "method": "keyring_file_add_key", 00:18:55.552 "req_id": 1 00:18:55.552 } 00:18:55.552 Got JSON-RPC error response 00:18:55.552 response: 00:18:55.552 { 00:18:55.552 "code": -1, 00:18:55.552 "message": "Operation not permitted" 00:18:55.552 } 00:18:55.552 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:55.811 [2024-11-15 12:40:35.911074] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.811 [2024-11-15 12:40:35.911132] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:55.811 request: 00:18:55.811 { 00:18:55.811 "name": "TLSTEST", 00:18:55.811 "trtype": "tcp", 00:18:55.811 "traddr": "10.0.0.2", 00:18:55.811 "adrfam": "ipv4", 00:18:55.811 "trsvcid": "4420", 00:18:55.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.811 "prchk_reftag": false, 00:18:55.811 "prchk_guard": false, 00:18:55.811 "hdgst": false, 00:18:55.811 "ddgst": false, 00:18:55.811 "psk": "key0", 00:18:55.811 "allow_unrecognized_csi": false, 00:18:55.811 "method": "bdev_nvme_attach_controller", 00:18:55.811 "req_id": 1 00:18:55.811 } 00:18:55.811 Got JSON-RPC error response 00:18:55.811 response: 00:18:55.811 { 00:18:55.811 "code": -126, 00:18:55.811 "message": "Required key not available" 00:18:55.811 } 00:18:55.811 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1044620 00:18:55.811 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1044620 ']' 00:18:55.811 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1044620 00:18:55.811 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:55.811 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.811 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1044620 00:18:55.811 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:55.811 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:55.811 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1044620' 00:18:55.811 killing process with pid 1044620 00:18:55.811 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1044620 00:18:55.811 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.811 00:18:55.811 Latency(us) 00:18:55.811 [2024-11-15T11:40:36.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.811 [2024-11-15T11:40:36.155Z] =================================================================================================================== 00:18:55.811 [2024-11-15T11:40:36.155Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:55.811 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1044620 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1043020 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1043020 ']' 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1043020 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1043020 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1043020' 00:18:56.069 killing process with pid 1043020 00:18:56.069 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1043020 00:18:56.070 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1043020 00:18:56.328 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:56.328 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:56.328 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:56.328 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.328 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1044771 00:18:56.328 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:56.328 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1044771 00:18:56.328 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1044771 ']' 00:18:56.328 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.328 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.328 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.328 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.328 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.328 [2024-11-15 12:40:36.500637] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:18:56.328 [2024-11-15 12:40:36.500750] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.328 [2024-11-15 12:40:36.572940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.328 [2024-11-15 12:40:36.631378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.328 [2024-11-15 12:40:36.631438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.328 [2024-11-15 12:40:36.631466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.328 [2024-11-15 12:40:36.631477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.328 [2024-11-15 12:40:36.631486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
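The keyring failure above is the negative-path check in target/tls.sh: the PSK file /tmp/tmp.SErSMJc6E0 is still at mode 0666, keyring_file refuses key files readable by group or others, and the following bdev_nvme_attach_controller --psk key0 therefore cannot load the key ("Required key not available"). A minimal sketch of that check, not part of the captured output ($SPDK_DIR stands in for the spdk checkout used in this run):

  KEY=/tmp/tmp.SErSMJc6E0
  chmod 0666 "$KEY"
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY"
  # rejected: "Invalid permissions for key file ... 0100666" -> JSON-RPC error -1 (Operation not permitted)
  chmod 0600 "$KEY"
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY"
  # accepted once only the owner can read the PSK; a subsequent --psk key0 attach can then resolve it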
00:18:56.328 [2024-11-15 12:40:36.632102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.SErSMJc6E0 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.SErSMJc6E0 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.SErSMJc6E0 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SErSMJc6E0 00:18:56.586 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:56.844 [2024-11-15 12:40:37.029116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.844 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:57.102 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:57.360 [2024-11-15 12:40:37.582607] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:57.360 [2024-11-15 12:40:37.582887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.360 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:57.618 malloc0 00:18:57.618 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:57.877 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SErSMJc6E0 00:18:58.135 [2024-11-15 
12:40:38.434853] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SErSMJc6E0': 0100666 00:18:58.135 [2024-11-15 12:40:38.434894] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:58.135 request: 00:18:58.135 { 00:18:58.135 "name": "key0", 00:18:58.135 "path": "/tmp/tmp.SErSMJc6E0", 00:18:58.135 "method": "keyring_file_add_key", 00:18:58.135 "req_id": 1 00:18:58.135 } 00:18:58.135 Got JSON-RPC error response 00:18:58.135 response: 00:18:58.135 { 00:18:58.135 "code": -1, 00:18:58.135 "message": "Operation not permitted" 00:18:58.135 } 00:18:58.135 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:58.393 [2024-11-15 12:40:38.727675] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:58.393 [2024-11-15 12:40:38.727765] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:58.393 request: 00:18:58.393 { 00:18:58.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.393 "host": "nqn.2016-06.io.spdk:host1", 00:18:58.393 "psk": "key0", 00:18:58.393 "method": "nvmf_subsystem_add_host", 00:18:58.393 "req_id": 1 00:18:58.393 } 00:18:58.393 Got JSON-RPC error response 00:18:58.393 response: 00:18:58.393 { 00:18:58.393 "code": -32603, 00:18:58.393 "message": "Internal error" 00:18:58.393 } 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1044771 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1044771 ']' 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1044771 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1044771 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1044771' 00:18:58.651 killing process with pid 1044771 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1044771 00:18:58.651 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1044771 00:18:58.909 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.SErSMJc6E0 00:18:58.909 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:58.909 12:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:58.909 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.909 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.909 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1045076 00:18:58.909 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:58.909 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1045076 00:18:58.909 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1045076 ']' 00:18:58.909 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.909 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.909 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.909 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.909 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.909 [2024-11-15 12:40:39.088636] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:18:58.909 [2024-11-15 12:40:39.088742] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.909 [2024-11-15 12:40:39.158524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.909 [2024-11-15 12:40:39.213791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.909 [2024-11-15 12:40:39.213848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.909 [2024-11-15 12:40:39.213877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.909 [2024-11-15 12:40:39.213889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.909 [2024-11-15 12:40:39.213900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
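With the key file switched to mode 0600 at target/tls.sh@182, the setup_nvmf_tgt sequence that follows builds a TLS-enabled target around it. Condensed from the rpc.py calls in this run (not captured output; $RPC stands in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py):

  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS on the listener
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 /tmp/tmp.SErSMJc6E0    # accepted now that the file is 0600
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0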
00:18:58.909 [2024-11-15 12:40:39.214497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.168 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.168 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:59.168 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:59.168 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:59.168 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.168 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.168 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.SErSMJc6E0 00:18:59.168 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SErSMJc6E0 00:18:59.168 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:59.426 [2024-11-15 12:40:39.632934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.426 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:59.685 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:59.944 [2024-11-15 12:40:40.282800] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:59.944 [2024-11-15 12:40:40.283139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.202 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:00.461 malloc0 00:19:00.461 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:00.719 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SErSMJc6E0 00:19:00.977 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:01.235 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1045368 00:19:01.235 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:01.235 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:01.235 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1045368 /var/tmp/bdevperf.sock 00:19:01.235 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1045368 ']' 00:19:01.235 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.235 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.235 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.235 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.235 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.494 [2024-11-15 12:40:41.601792] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:19:01.494 [2024-11-15 12:40:41.601877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045368 ] 00:19:01.494 [2024-11-15 12:40:41.670322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.494 [2024-11-15 12:40:41.727873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.752 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.752 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:01.752 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SErSMJc6E0 00:19:02.010 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:02.269 [2024-11-15 12:40:42.366376] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.269 TLSTESTn1 00:19:02.269 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:02.527 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:02.527 "subsystems": [ 00:19:02.527 { 00:19:02.527 "subsystem": "keyring", 00:19:02.527 "config": [ 00:19:02.527 { 00:19:02.527 "method": "keyring_file_add_key", 00:19:02.527 "params": { 00:19:02.527 "name": "key0", 00:19:02.527 "path": "/tmp/tmp.SErSMJc6E0" 00:19:02.527 } 00:19:02.527 } 00:19:02.527 ] 00:19:02.527 }, 00:19:02.527 { 00:19:02.527 "subsystem": "iobuf", 00:19:02.527 "config": [ 00:19:02.527 { 00:19:02.527 "method": "iobuf_set_options", 00:19:02.527 "params": { 00:19:02.527 "small_pool_count": 8192, 00:19:02.527 "large_pool_count": 1024, 00:19:02.527 "small_bufsize": 8192, 00:19:02.527 "large_bufsize": 135168, 00:19:02.527 "enable_numa": false 00:19:02.527 } 00:19:02.527 } 00:19:02.527 ] 00:19:02.527 }, 00:19:02.527 { 00:19:02.527 "subsystem": "sock", 00:19:02.527 "config": [ 00:19:02.527 { 00:19:02.527 "method": "sock_set_default_impl", 00:19:02.527 "params": { 00:19:02.527 "impl_name": "posix" 
00:19:02.527 } 00:19:02.527 }, 00:19:02.527 { 00:19:02.527 "method": "sock_impl_set_options", 00:19:02.527 "params": { 00:19:02.527 "impl_name": "ssl", 00:19:02.527 "recv_buf_size": 4096, 00:19:02.527 "send_buf_size": 4096, 00:19:02.527 "enable_recv_pipe": true, 00:19:02.527 "enable_quickack": false, 00:19:02.527 "enable_placement_id": 0, 00:19:02.527 "enable_zerocopy_send_server": true, 00:19:02.527 "enable_zerocopy_send_client": false, 00:19:02.527 "zerocopy_threshold": 0, 00:19:02.527 "tls_version": 0, 00:19:02.527 "enable_ktls": false 00:19:02.527 } 00:19:02.527 }, 00:19:02.527 { 00:19:02.527 "method": "sock_impl_set_options", 00:19:02.527 "params": { 00:19:02.527 "impl_name": "posix", 00:19:02.527 "recv_buf_size": 2097152, 00:19:02.527 "send_buf_size": 2097152, 00:19:02.527 "enable_recv_pipe": true, 00:19:02.527 "enable_quickack": false, 00:19:02.527 "enable_placement_id": 0, 00:19:02.527 "enable_zerocopy_send_server": true, 00:19:02.527 "enable_zerocopy_send_client": false, 00:19:02.527 "zerocopy_threshold": 0, 00:19:02.527 "tls_version": 0, 00:19:02.527 "enable_ktls": false 00:19:02.527 } 00:19:02.527 } 00:19:02.527 ] 00:19:02.527 }, 00:19:02.527 { 00:19:02.527 "subsystem": "vmd", 00:19:02.527 "config": [] 00:19:02.527 }, 00:19:02.527 { 00:19:02.527 "subsystem": "accel", 00:19:02.527 "config": [ 00:19:02.527 { 00:19:02.527 "method": "accel_set_options", 00:19:02.527 "params": { 00:19:02.527 "small_cache_size": 128, 00:19:02.527 "large_cache_size": 16, 00:19:02.527 "task_count": 2048, 00:19:02.527 "sequence_count": 2048, 00:19:02.527 "buf_count": 2048 00:19:02.527 } 00:19:02.527 } 00:19:02.527 ] 00:19:02.527 }, 00:19:02.527 { 00:19:02.527 "subsystem": "bdev", 00:19:02.528 "config": [ 00:19:02.528 { 00:19:02.528 "method": "bdev_set_options", 00:19:02.528 "params": { 00:19:02.528 "bdev_io_pool_size": 65535, 00:19:02.528 "bdev_io_cache_size": 256, 00:19:02.528 "bdev_auto_examine": true, 00:19:02.528 "iobuf_small_cache_size": 128, 00:19:02.528 "iobuf_large_cache_size": 16 00:19:02.528 } 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "method": "bdev_raid_set_options", 00:19:02.528 "params": { 00:19:02.528 "process_window_size_kb": 1024, 00:19:02.528 "process_max_bandwidth_mb_sec": 0 00:19:02.528 } 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "method": "bdev_iscsi_set_options", 00:19:02.528 "params": { 00:19:02.528 "timeout_sec": 30 00:19:02.528 } 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "method": "bdev_nvme_set_options", 00:19:02.528 "params": { 00:19:02.528 "action_on_timeout": "none", 00:19:02.528 "timeout_us": 0, 00:19:02.528 "timeout_admin_us": 0, 00:19:02.528 "keep_alive_timeout_ms": 10000, 00:19:02.528 "arbitration_burst": 0, 00:19:02.528 "low_priority_weight": 0, 00:19:02.528 "medium_priority_weight": 0, 00:19:02.528 "high_priority_weight": 0, 00:19:02.528 "nvme_adminq_poll_period_us": 10000, 00:19:02.528 "nvme_ioq_poll_period_us": 0, 00:19:02.528 "io_queue_requests": 0, 00:19:02.528 "delay_cmd_submit": true, 00:19:02.528 "transport_retry_count": 4, 00:19:02.528 "bdev_retry_count": 3, 00:19:02.528 "transport_ack_timeout": 0, 00:19:02.528 "ctrlr_loss_timeout_sec": 0, 00:19:02.528 "reconnect_delay_sec": 0, 00:19:02.528 "fast_io_fail_timeout_sec": 0, 00:19:02.528 "disable_auto_failback": false, 00:19:02.528 "generate_uuids": false, 00:19:02.528 "transport_tos": 0, 00:19:02.528 "nvme_error_stat": false, 00:19:02.528 "rdma_srq_size": 0, 00:19:02.528 "io_path_stat": false, 00:19:02.528 "allow_accel_sequence": false, 00:19:02.528 "rdma_max_cq_size": 0, 00:19:02.528 
"rdma_cm_event_timeout_ms": 0, 00:19:02.528 "dhchap_digests": [ 00:19:02.528 "sha256", 00:19:02.528 "sha384", 00:19:02.528 "sha512" 00:19:02.528 ], 00:19:02.528 "dhchap_dhgroups": [ 00:19:02.528 "null", 00:19:02.528 "ffdhe2048", 00:19:02.528 "ffdhe3072", 00:19:02.528 "ffdhe4096", 00:19:02.528 "ffdhe6144", 00:19:02.528 "ffdhe8192" 00:19:02.528 ] 00:19:02.528 } 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "method": "bdev_nvme_set_hotplug", 00:19:02.528 "params": { 00:19:02.528 "period_us": 100000, 00:19:02.528 "enable": false 00:19:02.528 } 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "method": "bdev_malloc_create", 00:19:02.528 "params": { 00:19:02.528 "name": "malloc0", 00:19:02.528 "num_blocks": 8192, 00:19:02.528 "block_size": 4096, 00:19:02.528 "physical_block_size": 4096, 00:19:02.528 "uuid": "86516372-05b1-47c7-8b3e-5a35f523a8f4", 00:19:02.528 "optimal_io_boundary": 0, 00:19:02.528 "md_size": 0, 00:19:02.528 "dif_type": 0, 00:19:02.528 "dif_is_head_of_md": false, 00:19:02.528 "dif_pi_format": 0 00:19:02.528 } 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "method": "bdev_wait_for_examine" 00:19:02.528 } 00:19:02.528 ] 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "subsystem": "nbd", 00:19:02.528 "config": [] 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "subsystem": "scheduler", 00:19:02.528 "config": [ 00:19:02.528 { 00:19:02.528 "method": "framework_set_scheduler", 00:19:02.528 "params": { 00:19:02.528 "name": "static" 00:19:02.528 } 00:19:02.528 } 00:19:02.528 ] 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "subsystem": "nvmf", 00:19:02.528 "config": [ 00:19:02.528 { 00:19:02.528 "method": "nvmf_set_config", 00:19:02.528 "params": { 00:19:02.528 "discovery_filter": "match_any", 00:19:02.528 "admin_cmd_passthru": { 00:19:02.528 "identify_ctrlr": false 00:19:02.528 }, 00:19:02.528 "dhchap_digests": [ 00:19:02.528 "sha256", 00:19:02.528 "sha384", 00:19:02.528 "sha512" 00:19:02.528 ], 00:19:02.528 "dhchap_dhgroups": [ 00:19:02.528 "null", 00:19:02.528 "ffdhe2048", 00:19:02.528 "ffdhe3072", 00:19:02.528 "ffdhe4096", 00:19:02.528 "ffdhe6144", 00:19:02.528 "ffdhe8192" 00:19:02.528 ] 00:19:02.528 } 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "method": "nvmf_set_max_subsystems", 00:19:02.528 "params": { 00:19:02.528 "max_subsystems": 1024 00:19:02.528 } 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "method": "nvmf_set_crdt", 00:19:02.528 "params": { 00:19:02.528 "crdt1": 0, 00:19:02.528 "crdt2": 0, 00:19:02.528 "crdt3": 0 00:19:02.528 } 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "method": "nvmf_create_transport", 00:19:02.528 "params": { 00:19:02.528 "trtype": "TCP", 00:19:02.528 "max_queue_depth": 128, 00:19:02.528 "max_io_qpairs_per_ctrlr": 127, 00:19:02.528 "in_capsule_data_size": 4096, 00:19:02.528 "max_io_size": 131072, 00:19:02.528 "io_unit_size": 131072, 00:19:02.528 "max_aq_depth": 128, 00:19:02.528 "num_shared_buffers": 511, 00:19:02.528 "buf_cache_size": 4294967295, 00:19:02.528 "dif_insert_or_strip": false, 00:19:02.528 "zcopy": false, 00:19:02.528 "c2h_success": false, 00:19:02.528 "sock_priority": 0, 00:19:02.528 "abort_timeout_sec": 1, 00:19:02.528 "ack_timeout": 0, 00:19:02.528 "data_wr_pool_size": 0 00:19:02.528 } 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "method": "nvmf_create_subsystem", 00:19:02.528 "params": { 00:19:02.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.528 "allow_any_host": false, 00:19:02.528 "serial_number": "SPDK00000000000001", 00:19:02.528 "model_number": "SPDK bdev Controller", 00:19:02.528 "max_namespaces": 10, 00:19:02.528 "min_cntlid": 1, 00:19:02.528 
"max_cntlid": 65519, 00:19:02.528 "ana_reporting": false 00:19:02.528 } 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "method": "nvmf_subsystem_add_host", 00:19:02.528 "params": { 00:19:02.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.528 "host": "nqn.2016-06.io.spdk:host1", 00:19:02.528 "psk": "key0" 00:19:02.528 } 00:19:02.528 }, 00:19:02.528 { 00:19:02.528 "method": "nvmf_subsystem_add_ns", 00:19:02.528 "params": { 00:19:02.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.529 "namespace": { 00:19:02.529 "nsid": 1, 00:19:02.529 "bdev_name": "malloc0", 00:19:02.529 "nguid": "8651637205B147C78B3E5A35F523A8F4", 00:19:02.529 "uuid": "86516372-05b1-47c7-8b3e-5a35f523a8f4", 00:19:02.529 "no_auto_visible": false 00:19:02.529 } 00:19:02.529 } 00:19:02.529 }, 00:19:02.529 { 00:19:02.529 "method": "nvmf_subsystem_add_listener", 00:19:02.529 "params": { 00:19:02.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.529 "listen_address": { 00:19:02.529 "trtype": "TCP", 00:19:02.529 "adrfam": "IPv4", 00:19:02.529 "traddr": "10.0.0.2", 00:19:02.529 "trsvcid": "4420" 00:19:02.529 }, 00:19:02.529 "secure_channel": true 00:19:02.529 } 00:19:02.529 } 00:19:02.529 ] 00:19:02.529 } 00:19:02.529 ] 00:19:02.529 }' 00:19:02.529 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:02.787 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:02.787 "subsystems": [ 00:19:02.787 { 00:19:02.787 "subsystem": "keyring", 00:19:02.787 "config": [ 00:19:02.787 { 00:19:02.787 "method": "keyring_file_add_key", 00:19:02.787 "params": { 00:19:02.787 "name": "key0", 00:19:02.787 "path": "/tmp/tmp.SErSMJc6E0" 00:19:02.787 } 00:19:02.787 } 00:19:02.787 ] 00:19:02.787 }, 00:19:02.787 { 00:19:02.787 "subsystem": "iobuf", 00:19:02.787 "config": [ 00:19:02.787 { 00:19:02.787 "method": "iobuf_set_options", 00:19:02.787 "params": { 00:19:02.787 "small_pool_count": 8192, 00:19:02.787 "large_pool_count": 1024, 00:19:02.787 "small_bufsize": 8192, 00:19:02.787 "large_bufsize": 135168, 00:19:02.787 "enable_numa": false 00:19:02.787 } 00:19:02.787 } 00:19:02.787 ] 00:19:02.787 }, 00:19:02.787 { 00:19:02.787 "subsystem": "sock", 00:19:02.787 "config": [ 00:19:02.787 { 00:19:02.787 "method": "sock_set_default_impl", 00:19:02.787 "params": { 00:19:02.787 "impl_name": "posix" 00:19:02.787 } 00:19:02.787 }, 00:19:02.787 { 00:19:02.787 "method": "sock_impl_set_options", 00:19:02.787 "params": { 00:19:02.787 "impl_name": "ssl", 00:19:02.787 "recv_buf_size": 4096, 00:19:02.787 "send_buf_size": 4096, 00:19:02.787 "enable_recv_pipe": true, 00:19:02.787 "enable_quickack": false, 00:19:02.787 "enable_placement_id": 0, 00:19:02.787 "enable_zerocopy_send_server": true, 00:19:02.787 "enable_zerocopy_send_client": false, 00:19:02.787 "zerocopy_threshold": 0, 00:19:02.787 "tls_version": 0, 00:19:02.787 "enable_ktls": false 00:19:02.787 } 00:19:02.787 }, 00:19:02.787 { 00:19:02.787 "method": "sock_impl_set_options", 00:19:02.787 "params": { 00:19:02.787 "impl_name": "posix", 00:19:02.787 "recv_buf_size": 2097152, 00:19:02.787 "send_buf_size": 2097152, 00:19:02.787 "enable_recv_pipe": true, 00:19:02.787 "enable_quickack": false, 00:19:02.787 "enable_placement_id": 0, 00:19:02.787 "enable_zerocopy_send_server": true, 00:19:02.787 "enable_zerocopy_send_client": false, 00:19:02.787 "zerocopy_threshold": 0, 00:19:02.787 "tls_version": 0, 00:19:02.787 "enable_ktls": false 00:19:02.787 } 00:19:02.787 
} 00:19:02.787 ] 00:19:02.787 }, 00:19:02.787 { 00:19:02.787 "subsystem": "vmd", 00:19:02.788 "config": [] 00:19:02.788 }, 00:19:02.788 { 00:19:02.788 "subsystem": "accel", 00:19:02.788 "config": [ 00:19:02.788 { 00:19:02.788 "method": "accel_set_options", 00:19:02.788 "params": { 00:19:02.788 "small_cache_size": 128, 00:19:02.788 "large_cache_size": 16, 00:19:02.788 "task_count": 2048, 00:19:02.788 "sequence_count": 2048, 00:19:02.788 "buf_count": 2048 00:19:02.788 } 00:19:02.788 } 00:19:02.788 ] 00:19:02.788 }, 00:19:02.788 { 00:19:02.788 "subsystem": "bdev", 00:19:02.788 "config": [ 00:19:02.788 { 00:19:02.788 "method": "bdev_set_options", 00:19:02.788 "params": { 00:19:02.788 "bdev_io_pool_size": 65535, 00:19:02.788 "bdev_io_cache_size": 256, 00:19:02.788 "bdev_auto_examine": true, 00:19:02.788 "iobuf_small_cache_size": 128, 00:19:02.788 "iobuf_large_cache_size": 16 00:19:02.788 } 00:19:02.788 }, 00:19:02.788 { 00:19:02.788 "method": "bdev_raid_set_options", 00:19:02.788 "params": { 00:19:02.788 "process_window_size_kb": 1024, 00:19:02.788 "process_max_bandwidth_mb_sec": 0 00:19:02.788 } 00:19:02.788 }, 00:19:02.788 { 00:19:02.788 "method": "bdev_iscsi_set_options", 00:19:02.788 "params": { 00:19:02.788 "timeout_sec": 30 00:19:02.788 } 00:19:02.788 }, 00:19:02.788 { 00:19:02.788 "method": "bdev_nvme_set_options", 00:19:02.788 "params": { 00:19:02.788 "action_on_timeout": "none", 00:19:02.788 "timeout_us": 0, 00:19:02.788 "timeout_admin_us": 0, 00:19:02.788 "keep_alive_timeout_ms": 10000, 00:19:02.788 "arbitration_burst": 0, 00:19:02.788 "low_priority_weight": 0, 00:19:02.788 "medium_priority_weight": 0, 00:19:02.788 "high_priority_weight": 0, 00:19:02.788 "nvme_adminq_poll_period_us": 10000, 00:19:02.788 "nvme_ioq_poll_period_us": 0, 00:19:02.788 "io_queue_requests": 512, 00:19:02.788 "delay_cmd_submit": true, 00:19:02.788 "transport_retry_count": 4, 00:19:02.788 "bdev_retry_count": 3, 00:19:02.788 "transport_ack_timeout": 0, 00:19:02.788 "ctrlr_loss_timeout_sec": 0, 00:19:02.788 "reconnect_delay_sec": 0, 00:19:02.788 "fast_io_fail_timeout_sec": 0, 00:19:02.788 "disable_auto_failback": false, 00:19:02.788 "generate_uuids": false, 00:19:02.788 "transport_tos": 0, 00:19:02.788 "nvme_error_stat": false, 00:19:02.788 "rdma_srq_size": 0, 00:19:02.788 "io_path_stat": false, 00:19:02.788 "allow_accel_sequence": false, 00:19:02.788 "rdma_max_cq_size": 0, 00:19:02.788 "rdma_cm_event_timeout_ms": 0, 00:19:02.788 "dhchap_digests": [ 00:19:02.788 "sha256", 00:19:02.788 "sha384", 00:19:02.788 "sha512" 00:19:02.788 ], 00:19:02.788 "dhchap_dhgroups": [ 00:19:02.788 "null", 00:19:02.788 "ffdhe2048", 00:19:02.788 "ffdhe3072", 00:19:02.788 "ffdhe4096", 00:19:02.788 "ffdhe6144", 00:19:02.788 "ffdhe8192" 00:19:02.788 ] 00:19:02.788 } 00:19:02.788 }, 00:19:02.788 { 00:19:02.788 "method": "bdev_nvme_attach_controller", 00:19:02.788 "params": { 00:19:02.788 "name": "TLSTEST", 00:19:02.788 "trtype": "TCP", 00:19:02.788 "adrfam": "IPv4", 00:19:02.788 "traddr": "10.0.0.2", 00:19:02.788 "trsvcid": "4420", 00:19:02.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.788 "prchk_reftag": false, 00:19:02.788 "prchk_guard": false, 00:19:02.788 "ctrlr_loss_timeout_sec": 0, 00:19:02.788 "reconnect_delay_sec": 0, 00:19:02.788 "fast_io_fail_timeout_sec": 0, 00:19:02.788 "psk": "key0", 00:19:02.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.788 "hdgst": false, 00:19:02.788 "ddgst": false, 00:19:02.788 "multipath": "multipath" 00:19:02.788 } 00:19:02.788 }, 00:19:02.788 { 00:19:02.788 "method": 
"bdev_nvme_set_hotplug", 00:19:02.788 "params": { 00:19:02.788 "period_us": 100000, 00:19:02.788 "enable": false 00:19:02.788 } 00:19:02.788 }, 00:19:02.788 { 00:19:02.788 "method": "bdev_wait_for_examine" 00:19:02.788 } 00:19:02.788 ] 00:19:02.788 }, 00:19:02.788 { 00:19:02.788 "subsystem": "nbd", 00:19:02.788 "config": [] 00:19:02.788 } 00:19:02.788 ] 00:19:02.788 }' 00:19:02.788 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1045368 00:19:02.788 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1045368 ']' 00:19:02.788 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1045368 00:19:02.788 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:03.047 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.047 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1045368 00:19:03.047 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:03.047 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:03.047 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1045368' 00:19:03.047 killing process with pid 1045368 00:19:03.047 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1045368 00:19:03.047 Received shutdown signal, test time was about 10.000000 seconds 00:19:03.047 00:19:03.047 Latency(us) 00:19:03.047 [2024-11-15T11:40:43.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.047 [2024-11-15T11:40:43.391Z] =================================================================================================================== 00:19:03.047 [2024-11-15T11:40:43.391Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:03.047 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1045368 00:19:03.047 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1045076 00:19:03.047 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1045076 ']' 00:19:03.047 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1045076 00:19:03.047 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:03.047 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.047 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1045076 00:19:03.305 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:03.305 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:03.305 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1045076' 00:19:03.305 killing process with pid 1045076 00:19:03.305 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1045076 00:19:03.305 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1045076 00:19:03.564 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:03.564 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:03.564 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:03.564 "subsystems": [ 00:19:03.564 { 00:19:03.564 "subsystem": "keyring", 00:19:03.564 "config": [ 00:19:03.564 { 00:19:03.564 "method": "keyring_file_add_key", 00:19:03.564 "params": { 00:19:03.564 "name": "key0", 00:19:03.564 "path": "/tmp/tmp.SErSMJc6E0" 00:19:03.564 } 00:19:03.564 } 00:19:03.564 ] 00:19:03.564 }, 00:19:03.564 { 00:19:03.564 "subsystem": "iobuf", 00:19:03.564 "config": [ 00:19:03.564 { 00:19:03.564 "method": "iobuf_set_options", 00:19:03.564 "params": { 00:19:03.564 "small_pool_count": 8192, 00:19:03.564 "large_pool_count": 1024, 00:19:03.564 "small_bufsize": 8192, 00:19:03.564 "large_bufsize": 135168, 00:19:03.564 "enable_numa": false 00:19:03.564 } 00:19:03.564 } 00:19:03.564 ] 00:19:03.564 }, 00:19:03.564 { 00:19:03.564 "subsystem": "sock", 00:19:03.564 "config": [ 00:19:03.564 { 00:19:03.564 "method": "sock_set_default_impl", 00:19:03.564 "params": { 00:19:03.564 "impl_name": "posix" 00:19:03.564 } 00:19:03.564 }, 00:19:03.564 { 00:19:03.564 "method": "sock_impl_set_options", 00:19:03.564 "params": { 00:19:03.564 "impl_name": "ssl", 00:19:03.564 "recv_buf_size": 4096, 00:19:03.564 "send_buf_size": 4096, 00:19:03.564 "enable_recv_pipe": true, 00:19:03.564 "enable_quickack": false, 00:19:03.564 "enable_placement_id": 0, 00:19:03.564 "enable_zerocopy_send_server": true, 00:19:03.564 "enable_zerocopy_send_client": false, 00:19:03.564 "zerocopy_threshold": 0, 00:19:03.564 "tls_version": 0, 00:19:03.564 "enable_ktls": false 00:19:03.564 } 00:19:03.564 }, 00:19:03.564 { 00:19:03.564 "method": "sock_impl_set_options", 00:19:03.564 "params": { 00:19:03.564 "impl_name": "posix", 00:19:03.564 "recv_buf_size": 2097152, 00:19:03.564 "send_buf_size": 2097152, 00:19:03.564 "enable_recv_pipe": true, 00:19:03.564 "enable_quickack": false, 00:19:03.564 "enable_placement_id": 0, 00:19:03.564 "enable_zerocopy_send_server": true, 00:19:03.564 "enable_zerocopy_send_client": false, 00:19:03.564 "zerocopy_threshold": 0, 00:19:03.564 "tls_version": 0, 00:19:03.564 "enable_ktls": false 00:19:03.564 } 00:19:03.564 } 00:19:03.564 ] 00:19:03.564 }, 00:19:03.564 { 00:19:03.564 "subsystem": "vmd", 00:19:03.564 "config": [] 00:19:03.564 }, 00:19:03.564 { 00:19:03.564 "subsystem": "accel", 00:19:03.564 "config": [ 00:19:03.564 { 00:19:03.564 "method": "accel_set_options", 00:19:03.564 "params": { 00:19:03.564 "small_cache_size": 128, 00:19:03.564 "large_cache_size": 16, 00:19:03.564 "task_count": 2048, 00:19:03.564 "sequence_count": 2048, 00:19:03.564 "buf_count": 2048 00:19:03.564 } 00:19:03.564 } 00:19:03.564 ] 00:19:03.564 }, 00:19:03.564 { 00:19:03.564 "subsystem": "bdev", 00:19:03.564 "config": [ 00:19:03.564 { 00:19:03.564 "method": "bdev_set_options", 00:19:03.564 "params": { 00:19:03.564 "bdev_io_pool_size": 65535, 00:19:03.564 "bdev_io_cache_size": 256, 00:19:03.564 "bdev_auto_examine": true, 00:19:03.564 "iobuf_small_cache_size": 128, 00:19:03.564 "iobuf_large_cache_size": 16 00:19:03.564 } 00:19:03.564 }, 00:19:03.564 { 00:19:03.564 "method": "bdev_raid_set_options", 00:19:03.564 "params": { 00:19:03.564 "process_window_size_kb": 1024, 00:19:03.564 "process_max_bandwidth_mb_sec": 0 00:19:03.564 } 00:19:03.564 }, 00:19:03.564 { 00:19:03.564 "method": "bdev_iscsi_set_options", 00:19:03.564 "params": { 00:19:03.564 
"timeout_sec": 30 00:19:03.564 } 00:19:03.564 }, 00:19:03.564 { 00:19:03.564 "method": "bdev_nvme_set_options", 00:19:03.564 "params": { 00:19:03.564 "action_on_timeout": "none", 00:19:03.564 "timeout_us": 0, 00:19:03.564 "timeout_admin_us": 0, 00:19:03.564 "keep_alive_timeout_ms": 10000, 00:19:03.564 "arbitration_burst": 0, 00:19:03.564 "low_priority_weight": 0, 00:19:03.564 "medium_priority_weight": 0, 00:19:03.564 "high_priority_weight": 0, 00:19:03.564 "nvme_adminq_poll_period_us": 10000, 00:19:03.564 "nvme_ioq_poll_period_us": 0, 00:19:03.564 "io_queue_requests": 0, 00:19:03.564 "delay_cmd_submit": true, 00:19:03.564 "transport_retry_count": 4, 00:19:03.564 "bdev_retry_count": 3, 00:19:03.564 "transport_ack_timeout": 0, 00:19:03.564 "ctrlr_loss_timeout_sec": 0, 00:19:03.564 "reconnect_delay_sec": 0, 00:19:03.564 "fast_io_fail_timeout_sec": 0, 00:19:03.564 "disable_auto_failback": false, 00:19:03.564 "generate_uuids": false, 00:19:03.564 "transport_tos": 0, 00:19:03.564 "nvme_error_stat": false, 00:19:03.564 "rdma_srq_size": 0, 00:19:03.564 "io_path_stat": false, 00:19:03.564 "allow_accel_sequence": false, 00:19:03.564 "rdma_max_cq_size": 0, 00:19:03.564 "rdma_cm_event_timeout_ms": 0, 00:19:03.564 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.564 "dhchap_digests": [ 00:19:03.564 "sha256", 00:19:03.564 "sha384", 00:19:03.564 "sha512" 00:19:03.564 ], 00:19:03.564 "dhchap_dhgroups": [ 00:19:03.564 "null", 00:19:03.564 "ffdhe2048", 00:19:03.564 "ffdhe3072", 00:19:03.564 "ffdhe4096", 00:19:03.564 "ffdhe6144", 00:19:03.564 "ffdhe8192" 00:19:03.564 ] 00:19:03.564 } 00:19:03.564 }, 00:19:03.564 { 00:19:03.564 "method": "bdev_nvme_set_hotplug", 00:19:03.564 "params": { 00:19:03.564 "period_us": 100000, 00:19:03.564 "enable": false 00:19:03.564 } 00:19:03.564 }, 00:19:03.564 { 00:19:03.564 "method": "bdev_malloc_create", 00:19:03.564 "params": { 00:19:03.564 "name": "malloc0", 00:19:03.564 "num_blocks": 8192, 00:19:03.565 "block_size": 4096, 00:19:03.565 "physical_block_size": 4096, 00:19:03.565 "uuid": "86516372-05b1-47c7-8b3e-5a35f523a8f4", 00:19:03.565 "optimal_io_boundary": 0, 00:19:03.565 "md_size": 0, 00:19:03.565 "dif_type": 0, 00:19:03.565 "dif_is_head_of_md": false, 00:19:03.565 "dif_pi_format": 0 00:19:03.565 } 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "method": "bdev_wait_for_examine" 00:19:03.565 } 00:19:03.565 ] 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "subsystem": "nbd", 00:19:03.565 "config": [] 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "subsystem": "scheduler", 00:19:03.565 "config": [ 00:19:03.565 { 00:19:03.565 "method": "framework_set_scheduler", 00:19:03.565 "params": { 00:19:03.565 "name": "static" 00:19:03.565 } 00:19:03.565 } 00:19:03.565 ] 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "subsystem": "nvmf", 00:19:03.565 "config": [ 00:19:03.565 { 00:19:03.565 "method": "nvmf_set_config", 00:19:03.565 "params": { 00:19:03.565 "discovery_filter": "match_any", 00:19:03.565 "admin_cmd_passthru": { 00:19:03.565 "identify_ctrlr": false 00:19:03.565 }, 00:19:03.565 "dhchap_digests": [ 00:19:03.565 "sha256", 00:19:03.565 "sha384", 00:19:03.565 "sha512" 00:19:03.565 ], 00:19:03.565 "dhchap_dhgroups": [ 00:19:03.565 "null", 00:19:03.565 "ffdhe2048", 00:19:03.565 "ffdhe3072", 00:19:03.565 "ffdhe4096", 00:19:03.565 "ffdhe6144", 00:19:03.565 "ffdhe8192" 00:19:03.565 ] 00:19:03.565 } 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "method": "nvmf_set_max_subsystems", 00:19:03.565 "params": { 00:19:03.565 "max_subsystems": 1024 
00:19:03.565 } 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "method": "nvmf_set_crdt", 00:19:03.565 "params": { 00:19:03.565 "crdt1": 0, 00:19:03.565 "crdt2": 0, 00:19:03.565 "crdt3": 0 00:19:03.565 } 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "method": "nvmf_create_transport", 00:19:03.565 "params": { 00:19:03.565 "trtype": "TCP", 00:19:03.565 "max_queue_depth": 128, 00:19:03.565 "max_io_qpairs_per_ctrlr": 127, 00:19:03.565 "in_capsule_data_size": 4096, 00:19:03.565 "max_io_size": 131072, 00:19:03.565 "io_unit_size": 131072, 00:19:03.565 "max_aq_depth": 128, 00:19:03.565 "num_shared_buffers": 511, 00:19:03.565 "buf_cache_size": 4294967295, 00:19:03.565 "dif_insert_or_strip": false, 00:19:03.565 "zcopy": false, 00:19:03.565 "c2h_success": false, 00:19:03.565 "sock_priority": 0, 00:19:03.565 "abort_timeout_sec": 1, 00:19:03.565 "ack_timeout": 0, 00:19:03.565 "data_wr_pool_size": 0 00:19:03.565 } 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "method": "nvmf_create_subsystem", 00:19:03.565 "params": { 00:19:03.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.565 "allow_any_host": false, 00:19:03.565 "serial_number": "SPDK00000000000001", 00:19:03.565 "model_number": "SPDK bdev Controller", 00:19:03.565 "max_namespaces": 10, 00:19:03.565 "min_cntlid": 1, 00:19:03.565 "max_cntlid": 65519, 00:19:03.565 "ana_reporting": false 00:19:03.565 } 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "method": "nvmf_subsystem_add_host", 00:19:03.565 "params": { 00:19:03.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.565 "host": "nqn.2016-06.io.spdk:host1", 00:19:03.565 "psk": "key0" 00:19:03.565 } 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "method": "nvmf_subsystem_add_ns", 00:19:03.565 "params": { 00:19:03.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.565 "namespace": { 00:19:03.565 "nsid": 1, 00:19:03.565 "bdev_name": "malloc0", 00:19:03.565 "nguid": "8651637205B147C78B3E5A35F523A8F4", 00:19:03.565 "uuid": "86516372-05b1-47c7-8b3e-5a35f523a8f4", 00:19:03.565 "no_auto_visible": false 00:19:03.565 } 00:19:03.565 } 00:19:03.565 }, 00:19:03.565 { 00:19:03.565 "method": "nvmf_subsystem_add_listener", 00:19:03.565 "params": { 00:19:03.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.565 "listen_address": { 00:19:03.565 "trtype": "TCP", 00:19:03.565 "adrfam": "IPv4", 00:19:03.565 "traddr": "10.0.0.2", 00:19:03.565 "trsvcid": "4420" 00:19:03.565 }, 00:19:03.565 "secure_channel": true 00:19:03.565 } 00:19:03.565 } 00:19:03.565 ] 00:19:03.565 } 00:19:03.565 ] 00:19:03.565 }' 00:19:03.565 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.565 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1045651 00:19:03.565 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:03.565 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1045651 00:19:03.565 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1045651 ']' 00:19:03.565 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.565 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.565 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:03.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.565 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.565 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.565 [2024-11-15 12:40:43.715765] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:19:03.565 [2024-11-15 12:40:43.715842] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.565 [2024-11-15 12:40:43.786930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.565 [2024-11-15 12:40:43.846830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.565 [2024-11-15 12:40:43.846884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.565 [2024-11-15 12:40:43.846913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.565 [2024-11-15 12:40:43.846925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.565 [2024-11-15 12:40:43.846936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.565 [2024-11-15 12:40:43.847599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.824 [2024-11-15 12:40:44.090749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.824 [2024-11-15 12:40:44.122783] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.824 [2024-11-15 12:40:44.123066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.758 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.758 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:04.758 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:04.758 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:04.758 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.758 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.758 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1045799 00:19:04.758 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1045799 /var/tmp/bdevperf.sock 00:19:04.758 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1045799 ']' 00:19:04.758 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.758 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:04.758 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.758 12:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:04.758 "subsystems": [ 00:19:04.758 { 00:19:04.758 "subsystem": "keyring", 00:19:04.758 "config": [ 00:19:04.758 { 00:19:04.758 "method": "keyring_file_add_key", 00:19:04.758 "params": { 00:19:04.758 "name": "key0", 00:19:04.758 "path": "/tmp/tmp.SErSMJc6E0" 00:19:04.758 } 00:19:04.758 } 00:19:04.758 ] 00:19:04.758 }, 00:19:04.758 { 00:19:04.758 "subsystem": "iobuf", 00:19:04.758 "config": [ 00:19:04.758 { 00:19:04.758 "method": "iobuf_set_options", 00:19:04.758 "params": { 00:19:04.758 "small_pool_count": 8192, 00:19:04.758 "large_pool_count": 1024, 00:19:04.758 "small_bufsize": 8192, 00:19:04.758 "large_bufsize": 135168, 00:19:04.758 "enable_numa": false 00:19:04.758 } 00:19:04.758 } 00:19:04.758 ] 00:19:04.758 }, 00:19:04.758 { 00:19:04.758 "subsystem": "sock", 00:19:04.758 "config": [ 00:19:04.758 { 00:19:04.758 "method": "sock_set_default_impl", 00:19:04.758 "params": { 00:19:04.758 "impl_name": "posix" 00:19:04.758 } 00:19:04.758 }, 00:19:04.758 { 00:19:04.758 "method": "sock_impl_set_options", 00:19:04.758 "params": { 00:19:04.758 "impl_name": "ssl", 00:19:04.758 "recv_buf_size": 4096, 00:19:04.758 "send_buf_size": 4096, 00:19:04.758 "enable_recv_pipe": true, 00:19:04.758 "enable_quickack": false, 00:19:04.758 "enable_placement_id": 0, 00:19:04.758 "enable_zerocopy_send_server": true, 00:19:04.758 "enable_zerocopy_send_client": false, 00:19:04.758 "zerocopy_threshold": 0, 00:19:04.758 "tls_version": 0, 00:19:04.758 "enable_ktls": false 00:19:04.758 } 00:19:04.758 }, 00:19:04.758 { 00:19:04.758 "method": "sock_impl_set_options", 00:19:04.758 "params": { 00:19:04.758 "impl_name": "posix", 00:19:04.758 "recv_buf_size": 2097152, 00:19:04.758 "send_buf_size": 2097152, 00:19:04.758 "enable_recv_pipe": true, 00:19:04.758 "enable_quickack": false, 00:19:04.758 "enable_placement_id": 0, 00:19:04.758 "enable_zerocopy_send_server": true, 00:19:04.758 "enable_zerocopy_send_client": false, 00:19:04.758 "zerocopy_threshold": 0, 00:19:04.758 "tls_version": 0, 00:19:04.758 "enable_ktls": false 00:19:04.758 } 00:19:04.758 } 00:19:04.758 ] 00:19:04.758 }, 00:19:04.758 { 00:19:04.758 "subsystem": "vmd", 00:19:04.758 "config": [] 00:19:04.758 }, 00:19:04.758 { 00:19:04.758 "subsystem": "accel", 00:19:04.758 "config": [ 00:19:04.758 { 00:19:04.758 "method": "accel_set_options", 00:19:04.758 "params": { 00:19:04.758 "small_cache_size": 128, 00:19:04.758 "large_cache_size": 16, 00:19:04.758 "task_count": 2048, 00:19:04.758 "sequence_count": 2048, 00:19:04.758 "buf_count": 2048 00:19:04.758 } 00:19:04.758 } 00:19:04.758 ] 00:19:04.758 }, 00:19:04.758 { 00:19:04.758 "subsystem": "bdev", 00:19:04.758 "config": [ 00:19:04.758 { 00:19:04.758 "method": "bdev_set_options", 00:19:04.758 "params": { 00:19:04.758 "bdev_io_pool_size": 65535, 00:19:04.758 "bdev_io_cache_size": 256, 00:19:04.758 "bdev_auto_examine": true, 00:19:04.758 "iobuf_small_cache_size": 128, 00:19:04.758 "iobuf_large_cache_size": 16 00:19:04.758 } 00:19:04.758 }, 00:19:04.758 { 00:19:04.758 "method": "bdev_raid_set_options", 00:19:04.758 "params": { 00:19:04.758 "process_window_size_kb": 1024, 00:19:04.758 "process_max_bandwidth_mb_sec": 0 00:19:04.758 } 00:19:04.758 }, 00:19:04.758 { 00:19:04.758 "method": "bdev_iscsi_set_options", 00:19:04.758 "params": { 00:19:04.758 "timeout_sec": 30 00:19:04.758 } 00:19:04.758 }, 00:19:04.758 { 00:19:04.759 "method": "bdev_nvme_set_options", 00:19:04.759 "params": { 00:19:04.759 "action_on_timeout": "none", 00:19:04.759 
"timeout_us": 0, 00:19:04.759 "timeout_admin_us": 0, 00:19:04.759 "keep_alive_timeout_ms": 10000, 00:19:04.759 "arbitration_burst": 0, 00:19:04.759 "low_priority_weight": 0, 00:19:04.759 "medium_priority_weight": 0, 00:19:04.759 "high_priority_weight": 0, 00:19:04.759 "nvme_adminq_poll_period_us": 10000, 00:19:04.759 "nvme_ioq_poll_period_us": 0, 00:19:04.759 "io_queue_requests": 512, 00:19:04.759 "delay_cmd_submit": true, 00:19:04.759 "transport_retry_count": 4, 00:19:04.759 "bdev_retry_count": 3, 00:19:04.759 "transport_ack_timeout": 0, 00:19:04.759 "ctrlr_loss_timeout_sec": 0, 00:19:04.759 "reconnect_delay_sec": 0, 00:19:04.759 "fast_io_fail_timeout_sec": 0, 00:19:04.759 "disable_auto_failback": false, 00:19:04.759 "generate_uuids": false, 00:19:04.759 "transport_tos": 0, 00:19:04.759 "nvme_error_stat": false, 00:19:04.759 "rdma_srq_size": 0, 00:19:04.759 "io_path_stat": false, 00:19:04.759 "allow_accel_sequence": false, 00:19:04.759 "rdma_max_cq_size": 0, 00:19:04.759 "rdma_cm_event_timeout_ms": 0, 00:19:04.759 "dhchap_digests": [ 00:19:04.759 "sha256", 00:19:04.759 "sha384", 00:19:04.759 "sha512" 00:19:04.759 ], 00:19:04.759 "dhchap_dhgroups": [ 00:19:04.759 "null", 00:19:04.759 "ffdhe2048", 00:19:04.759 "ffdhe3072", 00:19:04.759 "ffdhe4096", 00:19:04.759 "ffdhe6144", 00:19:04.759 "ffdhe8192" 00:19:04.759 ] 00:19:04.759 } 00:19:04.759 }, 00:19:04.759 { 00:19:04.759 "method": "bdev_nvme_attach_controller", 00:19:04.759 "params": { 00:19:04.759 "name": "TLSTEST", 00:19:04.759 "trtype": "TCP", 00:19:04.759 "adrfam": "IPv4", 00:19:04.759 "traddr": "10.0.0.2", 00:19:04.759 "trsvcid": "4420", 00:19:04.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.759 "prchk_reftag": false, 00:19:04.759 "prchk_guard": false, 00:19:04.759 "ctrlr_loss_timeout_sec": 0, 00:19:04.759 "reconnect_delay_sec": 0, 00:19:04.759 "fast_io_fail_timeout_sec": 0, 00:19:04.759 "psk": "key0", 00:19:04.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.759 "hdgst": false, 00:19:04.759 "ddgst": false, 00:19:04.759 "multipath": "multipath" 00:19:04.759 } 00:19:04.759 }, 00:19:04.759 { 00:19:04.759 "method": "bdev_nvme_set_hotplug", 00:19:04.759 "params": { 00:19:04.759 "period_us": 100000, 00:19:04.759 "enable": false 00:19:04.759 } 00:19:04.759 }, 00:19:04.759 { 00:19:04.759 "method": "bdev_wait_for_examine" 00:19:04.759 } 00:19:04.759 ] 00:19:04.759 }, 00:19:04.759 { 00:19:04.759 "subsystem": "nbd", 00:19:04.759 "config": [] 00:19:04.759 } 00:19:04.759 ] 00:19:04.759 }' 00:19:04.759 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.759 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.759 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.759 [2024-11-15 12:40:44.841939] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:19:04.759 [2024-11-15 12:40:44.842027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045799 ] 00:19:04.759 [2024-11-15 12:40:44.906520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.759 [2024-11-15 12:40:44.962686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.020 [2024-11-15 12:40:45.140208] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.020 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.020 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:05.020 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:05.020 Running I/O for 10 seconds... 00:19:07.333 3529.00 IOPS, 13.79 MiB/s [2024-11-15T11:40:48.614Z] 3545.00 IOPS, 13.85 MiB/s [2024-11-15T11:40:49.549Z] 3578.33 IOPS, 13.98 MiB/s [2024-11-15T11:40:50.483Z] 3582.75 IOPS, 14.00 MiB/s [2024-11-15T11:40:51.561Z] 3585.20 IOPS, 14.00 MiB/s [2024-11-15T11:40:52.535Z] 3599.00 IOPS, 14.06 MiB/s [2024-11-15T11:40:53.467Z] 3599.71 IOPS, 14.06 MiB/s [2024-11-15T11:40:54.399Z] 3611.12 IOPS, 14.11 MiB/s [2024-11-15T11:40:55.769Z] 3615.22 IOPS, 14.12 MiB/s [2024-11-15T11:40:55.769Z] 3599.00 IOPS, 14.06 MiB/s 00:19:15.425 Latency(us) 00:19:15.425 [2024-11-15T11:40:55.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.425 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:15.425 Verification LBA range: start 0x0 length 0x2000 00:19:15.425 TLSTESTn1 : 10.02 3604.44 14.08 0.00 0.00 35450.84 7427.41 28544.57 00:19:15.425 [2024-11-15T11:40:55.769Z] =================================================================================================================== 00:19:15.425 [2024-11-15T11:40:55.769Z] Total : 3604.44 14.08 0.00 0.00 35450.84 7427.41 28544.57 00:19:15.425 { 00:19:15.425 "results": [ 00:19:15.425 { 00:19:15.425 "job": "TLSTESTn1", 00:19:15.425 "core_mask": "0x4", 00:19:15.425 "workload": "verify", 00:19:15.425 "status": "finished", 00:19:15.425 "verify_range": { 00:19:15.425 "start": 0, 00:19:15.425 "length": 8192 00:19:15.425 }, 00:19:15.425 "queue_depth": 128, 00:19:15.425 "io_size": 4096, 00:19:15.425 "runtime": 10.019858, 00:19:15.425 "iops": 3604.442298483671, 00:19:15.425 "mibps": 14.07985272845184, 00:19:15.425 "io_failed": 0, 00:19:15.425 "io_timeout": 0, 00:19:15.425 "avg_latency_us": 35450.84261593302, 00:19:15.425 "min_latency_us": 7427.413333333333, 00:19:15.425 "max_latency_us": 28544.568888888887 00:19:15.425 } 00:19:15.425 ], 00:19:15.425 "core_count": 1 00:19:15.425 } 00:19:15.425 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:15.425 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1045799 00:19:15.425 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1045799 ']' 00:19:15.425 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1045799 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1045799 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1045799' 00:19:15.426 killing process with pid 1045799 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1045799 00:19:15.426 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.426 00:19:15.426 Latency(us) 00:19:15.426 [2024-11-15T11:40:55.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.426 [2024-11-15T11:40:55.770Z] =================================================================================================================== 00:19:15.426 [2024-11-15T11:40:55.770Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1045799 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1045651 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1045651 ']' 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1045651 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1045651 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1045651' 00:19:15.426 killing process with pid 1045651 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1045651 00:19:15.426 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1045651 00:19:15.684 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:15.684 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:15.684 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:15.684 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.684 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1047128 00:19:15.684 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:15.684 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1047128 
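The previous bdevperf (pid 1045799) and target (pid 1045651) have been killed, and nvmfappstart is launching a fresh nvmf_tgt (pid 1047128) inside the cvl_0_0_ns_spdk namespace for the next TLS scenario. A minimal sketch of that bring-up, using the same binary and flags as the log above; the poll loop only stands in for the harness's waitforlisten helper, with rpc_get_methods used as a convenient probe of the default /var/tmp/spdk.sock RPC socket:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the target with all tracepoint groups enabled, as in the log (-i 0 -e 0xFFFF)
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # wait until the target answers on its RPC socket before issuing configuration RPCs
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done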
00:19:15.684 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1047128 ']' 00:19:15.684 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.684 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.684 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.684 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.684 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.684 [2024-11-15 12:40:56.014856] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:19:15.684 [2024-11-15 12:40:56.014946] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.943 [2024-11-15 12:40:56.085112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.943 [2024-11-15 12:40:56.140122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.943 [2024-11-15 12:40:56.140178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.943 [2024-11-15 12:40:56.140205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.943 [2024-11-15 12:40:56.140217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.943 [2024-11-15 12:40:56.140226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
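The notices above are printed because the target was started with -e 0xFFFF, which enables every tracepoint group. A hedged sketch of acting on them, limited to the two options the log itself mentions (the copy destination is an arbitrary choice):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # live snapshot of the nvmf tracepoints for instance id 0, as suggested by the notice
  $SPDK/build/bin/spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file around for offline analysis after the run
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0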
00:19:15.943 [2024-11-15 12:40:56.140826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.943 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.943 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:15.943 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:15.943 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:15.943 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.943 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.943 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.SErSMJc6E0 00:19:15.943 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SErSMJc6E0 00:19:15.943 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:16.201 [2024-11-15 12:40:56.543429] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.459 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:16.717 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:16.975 [2024-11-15 12:40:57.157071] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:16.975 [2024-11-15 12:40:57.157297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.975 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:17.233 malloc0 00:19:17.233 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:17.491 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SErSMJc6E0 00:19:17.749 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:18.007 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1047416 00:19:18.007 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:18.007 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:18.007 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1047416 /var/tmp/bdevperf.sock 00:19:18.007 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1047416 ']' 00:19:18.007 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.007 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.007 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.007 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.007 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.007 [2024-11-15 12:40:58.313521] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:19:18.007 [2024-11-15 12:40:58.313602] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047416 ] 00:19:18.265 [2024-11-15 12:40:58.379526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.265 [2024-11-15 12:40:58.435998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.265 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.265 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:18.265 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SErSMJc6E0 00:19:18.522 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:18.780 [2024-11-15 12:40:59.077979] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:19.038 nvme0n1 00:19:19.038 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:19.038 Running I/O for 1 seconds... 
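Taken together, the RPCs above form the whole TLS data path for this case: the target gets a TCP transport, a malloc0-backed subsystem, a listener created with -k (TLS), and a host entry bound to the PSK registered as key0, while the bdevperf side loads the same key file and attaches with --psk before perform_tests starts the verify workload. A condensed sketch of that sequence using the exact commands from the log (the RPC and KEY variables are only shorthand; the key path is the temporary file the test generated earlier):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  KEY=/tmp/tmp.SErSMJc6E0

  # target side (default RPC socket)
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 $KEY
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

  # initiator side (bdevperf already running with -z -r /var/tmp/bdevperf.sock)
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 $KEY
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests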
00:19:19.971 3563.00 IOPS, 13.92 MiB/s 00:19:19.971 Latency(us) 00:19:19.971 [2024-11-15T11:41:00.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.971 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:19.971 Verification LBA range: start 0x0 length 0x2000 00:19:19.971 nvme0n1 : 1.03 3582.44 13.99 0.00 0.00 35259.77 8446.86 28932.93 00:19:19.971 [2024-11-15T11:41:00.315Z] =================================================================================================================== 00:19:19.971 [2024-11-15T11:41:00.315Z] Total : 3582.44 13.99 0.00 0.00 35259.77 8446.86 28932.93 00:19:19.971 { 00:19:19.971 "results": [ 00:19:19.971 { 00:19:19.971 "job": "nvme0n1", 00:19:19.971 "core_mask": "0x2", 00:19:19.971 "workload": "verify", 00:19:19.971 "status": "finished", 00:19:19.971 "verify_range": { 00:19:19.971 "start": 0, 00:19:19.971 "length": 8192 00:19:19.971 }, 00:19:19.971 "queue_depth": 128, 00:19:19.971 "io_size": 4096, 00:19:19.971 "runtime": 1.030583, 00:19:19.971 "iops": 3582.43828978355, 00:19:19.971 "mibps": 13.993899569466992, 00:19:19.971 "io_failed": 0, 00:19:19.971 "io_timeout": 0, 00:19:19.971 "avg_latency_us": 35259.77360138036, 00:19:19.971 "min_latency_us": 8446.862222222222, 00:19:19.971 "max_latency_us": 28932.93037037037 00:19:19.971 } 00:19:19.971 ], 00:19:19.971 "core_count": 1 00:19:19.971 } 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1047416 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1047416 ']' 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1047416 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1047416 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1047416' 00:19:20.229 killing process with pid 1047416 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1047416 00:19:20.229 Received shutdown signal, test time was about 1.000000 seconds 00:19:20.229 00:19:20.229 Latency(us) 00:19:20.229 [2024-11-15T11:41:00.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.229 [2024-11-15T11:41:00.573Z] =================================================================================================================== 00:19:20.229 [2024-11-15T11:41:00.573Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1047416 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1047128 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1047128 ']' 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1047128 00:19:20.229 12:41:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:20.229 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1047128 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1047128' 00:19:20.487 killing process with pid 1047128 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1047128 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1047128 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1047700 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1047700 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1047700 ']' 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.487 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.745 [2024-11-15 12:41:00.867015] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:19:20.745 [2024-11-15 12:41:00.867126] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.745 [2024-11-15 12:41:00.942243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.745 [2024-11-15 12:41:01.000410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.745 [2024-11-15 12:41:01.000454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:20.745 [2024-11-15 12:41:01.000481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.745 [2024-11-15 12:41:01.000492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.745 [2024-11-15 12:41:01.000501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.745 [2024-11-15 12:41:01.001140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.003 [2024-11-15 12:41:01.140630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.003 malloc0 00:19:21.003 [2024-11-15 12:41:01.172017] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:21.003 [2024-11-15 12:41:01.172282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1047837 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1047837 /var/tmp/bdevperf.sock 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1047837 ']' 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.003 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.003 [2024-11-15 12:41:01.242184] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:19:21.003 [2024-11-15 12:41:01.242255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047837 ] 00:19:21.003 [2024-11-15 12:41:01.306010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.262 [2024-11-15 12:41:01.364105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.262 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.262 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:21.262 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SErSMJc6E0 00:19:21.520 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:21.778 [2024-11-15 12:41:01.993659] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:21.778 nvme0n1 00:19:21.778 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:22.035 Running I/O for 1 seconds... 00:19:22.969 3468.00 IOPS, 13.55 MiB/s 00:19:22.969 Latency(us) 00:19:22.969 [2024-11-15T11:41:03.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.969 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:22.969 Verification LBA range: start 0x0 length 0x2000 00:19:22.969 nvme0n1 : 1.02 3523.18 13.76 0.00 0.00 35990.44 5776.88 27185.30 00:19:22.969 [2024-11-15T11:41:03.313Z] =================================================================================================================== 00:19:22.969 [2024-11-15T11:41:03.313Z] Total : 3523.18 13.76 0.00 0.00 35990.44 5776.88 27185.30 00:19:22.969 { 00:19:22.969 "results": [ 00:19:22.969 { 00:19:22.969 "job": "nvme0n1", 00:19:22.969 "core_mask": "0x2", 00:19:22.969 "workload": "verify", 00:19:22.969 "status": "finished", 00:19:22.969 "verify_range": { 00:19:22.969 "start": 0, 00:19:22.969 "length": 8192 00:19:22.969 }, 00:19:22.969 "queue_depth": 128, 00:19:22.969 "io_size": 4096, 00:19:22.969 "runtime": 1.020668, 00:19:22.969 "iops": 3523.1828567173657, 00:19:22.969 "mibps": 13.76243303405221, 00:19:22.969 "io_failed": 0, 00:19:22.969 "io_timeout": 0, 00:19:22.969 "avg_latency_us": 35990.44316236147, 00:19:22.969 "min_latency_us": 5776.877037037037, 00:19:22.969 "max_latency_us": 27185.303703703703 00:19:22.969 } 00:19:22.969 ], 00:19:22.969 "core_count": 1 00:19:22.969 } 00:19:22.969 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:22.969 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.969 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.227 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.227 12:41:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:23.227 "subsystems": [ 00:19:23.227 { 00:19:23.227 "subsystem": "keyring", 00:19:23.227 "config": [ 00:19:23.227 { 00:19:23.227 "method": "keyring_file_add_key", 00:19:23.227 "params": { 00:19:23.227 "name": "key0", 00:19:23.227 "path": "/tmp/tmp.SErSMJc6E0" 00:19:23.227 } 00:19:23.227 } 00:19:23.227 ] 00:19:23.227 }, 00:19:23.227 { 00:19:23.227 "subsystem": "iobuf", 00:19:23.227 "config": [ 00:19:23.227 { 00:19:23.227 "method": "iobuf_set_options", 00:19:23.227 "params": { 00:19:23.227 "small_pool_count": 8192, 00:19:23.227 "large_pool_count": 1024, 00:19:23.227 "small_bufsize": 8192, 00:19:23.227 "large_bufsize": 135168, 00:19:23.227 "enable_numa": false 00:19:23.227 } 00:19:23.227 } 00:19:23.227 ] 00:19:23.227 }, 00:19:23.227 { 00:19:23.227 "subsystem": "sock", 00:19:23.227 "config": [ 00:19:23.227 { 00:19:23.227 "method": "sock_set_default_impl", 00:19:23.227 "params": { 00:19:23.227 "impl_name": "posix" 00:19:23.227 } 00:19:23.227 }, 00:19:23.227 { 00:19:23.227 "method": "sock_impl_set_options", 00:19:23.227 "params": { 00:19:23.227 "impl_name": "ssl", 00:19:23.227 "recv_buf_size": 4096, 00:19:23.227 "send_buf_size": 4096, 00:19:23.227 "enable_recv_pipe": true, 00:19:23.227 "enable_quickack": false, 00:19:23.227 "enable_placement_id": 0, 00:19:23.227 "enable_zerocopy_send_server": true, 00:19:23.227 "enable_zerocopy_send_client": false, 00:19:23.227 "zerocopy_threshold": 0, 00:19:23.227 "tls_version": 0, 00:19:23.227 "enable_ktls": false 00:19:23.227 } 00:19:23.227 }, 00:19:23.227 { 00:19:23.227 "method": "sock_impl_set_options", 00:19:23.227 "params": { 00:19:23.227 "impl_name": "posix", 00:19:23.227 "recv_buf_size": 2097152, 00:19:23.227 "send_buf_size": 2097152, 00:19:23.227 "enable_recv_pipe": true, 00:19:23.227 "enable_quickack": false, 00:19:23.227 "enable_placement_id": 0, 00:19:23.227 "enable_zerocopy_send_server": true, 00:19:23.227 "enable_zerocopy_send_client": false, 00:19:23.227 "zerocopy_threshold": 0, 00:19:23.227 "tls_version": 0, 00:19:23.227 "enable_ktls": false 00:19:23.227 } 00:19:23.227 } 00:19:23.227 ] 00:19:23.227 }, 00:19:23.227 { 00:19:23.227 "subsystem": "vmd", 00:19:23.227 "config": [] 00:19:23.227 }, 00:19:23.227 { 00:19:23.227 "subsystem": "accel", 00:19:23.227 "config": [ 00:19:23.227 { 00:19:23.227 "method": "accel_set_options", 00:19:23.227 "params": { 00:19:23.227 "small_cache_size": 128, 00:19:23.227 "large_cache_size": 16, 00:19:23.227 "task_count": 2048, 00:19:23.227 "sequence_count": 2048, 00:19:23.227 "buf_count": 2048 00:19:23.227 } 00:19:23.227 } 00:19:23.227 ] 00:19:23.227 }, 00:19:23.227 { 00:19:23.227 "subsystem": "bdev", 00:19:23.227 "config": [ 00:19:23.227 { 00:19:23.227 "method": "bdev_set_options", 00:19:23.227 "params": { 00:19:23.227 "bdev_io_pool_size": 65535, 00:19:23.227 "bdev_io_cache_size": 256, 00:19:23.227 "bdev_auto_examine": true, 00:19:23.227 "iobuf_small_cache_size": 128, 00:19:23.227 "iobuf_large_cache_size": 16 00:19:23.227 } 00:19:23.227 }, 00:19:23.227 { 00:19:23.227 "method": "bdev_raid_set_options", 00:19:23.227 "params": { 00:19:23.227 "process_window_size_kb": 1024, 00:19:23.227 "process_max_bandwidth_mb_sec": 0 00:19:23.227 } 00:19:23.227 }, 00:19:23.227 { 00:19:23.227 "method": "bdev_iscsi_set_options", 00:19:23.227 "params": { 00:19:23.227 "timeout_sec": 30 00:19:23.227 } 00:19:23.227 }, 00:19:23.227 { 00:19:23.227 "method": "bdev_nvme_set_options", 00:19:23.227 "params": { 00:19:23.227 "action_on_timeout": "none", 00:19:23.227 
"timeout_us": 0, 00:19:23.227 "timeout_admin_us": 0, 00:19:23.227 "keep_alive_timeout_ms": 10000, 00:19:23.227 "arbitration_burst": 0, 00:19:23.227 "low_priority_weight": 0, 00:19:23.227 "medium_priority_weight": 0, 00:19:23.227 "high_priority_weight": 0, 00:19:23.227 "nvme_adminq_poll_period_us": 10000, 00:19:23.227 "nvme_ioq_poll_period_us": 0, 00:19:23.227 "io_queue_requests": 0, 00:19:23.227 "delay_cmd_submit": true, 00:19:23.227 "transport_retry_count": 4, 00:19:23.227 "bdev_retry_count": 3, 00:19:23.227 "transport_ack_timeout": 0, 00:19:23.227 "ctrlr_loss_timeout_sec": 0, 00:19:23.227 "reconnect_delay_sec": 0, 00:19:23.227 "fast_io_fail_timeout_sec": 0, 00:19:23.227 "disable_auto_failback": false, 00:19:23.227 "generate_uuids": false, 00:19:23.227 "transport_tos": 0, 00:19:23.228 "nvme_error_stat": false, 00:19:23.228 "rdma_srq_size": 0, 00:19:23.228 "io_path_stat": false, 00:19:23.228 "allow_accel_sequence": false, 00:19:23.228 "rdma_max_cq_size": 0, 00:19:23.228 "rdma_cm_event_timeout_ms": 0, 00:19:23.228 "dhchap_digests": [ 00:19:23.228 "sha256", 00:19:23.228 "sha384", 00:19:23.228 "sha512" 00:19:23.228 ], 00:19:23.228 "dhchap_dhgroups": [ 00:19:23.228 "null", 00:19:23.228 "ffdhe2048", 00:19:23.228 "ffdhe3072", 00:19:23.228 "ffdhe4096", 00:19:23.228 "ffdhe6144", 00:19:23.228 "ffdhe8192" 00:19:23.228 ] 00:19:23.228 } 00:19:23.228 }, 00:19:23.228 { 00:19:23.228 "method": "bdev_nvme_set_hotplug", 00:19:23.228 "params": { 00:19:23.228 "period_us": 100000, 00:19:23.228 "enable": false 00:19:23.228 } 00:19:23.228 }, 00:19:23.228 { 00:19:23.228 "method": "bdev_malloc_create", 00:19:23.228 "params": { 00:19:23.228 "name": "malloc0", 00:19:23.228 "num_blocks": 8192, 00:19:23.228 "block_size": 4096, 00:19:23.228 "physical_block_size": 4096, 00:19:23.228 "uuid": "5955dd9f-563b-4165-bc68-39c6f6b54c47", 00:19:23.228 "optimal_io_boundary": 0, 00:19:23.228 "md_size": 0, 00:19:23.228 "dif_type": 0, 00:19:23.228 "dif_is_head_of_md": false, 00:19:23.228 "dif_pi_format": 0 00:19:23.228 } 00:19:23.228 }, 00:19:23.228 { 00:19:23.228 "method": "bdev_wait_for_examine" 00:19:23.228 } 00:19:23.228 ] 00:19:23.228 }, 00:19:23.228 { 00:19:23.228 "subsystem": "nbd", 00:19:23.228 "config": [] 00:19:23.228 }, 00:19:23.228 { 00:19:23.228 "subsystem": "scheduler", 00:19:23.228 "config": [ 00:19:23.228 { 00:19:23.228 "method": "framework_set_scheduler", 00:19:23.228 "params": { 00:19:23.228 "name": "static" 00:19:23.228 } 00:19:23.228 } 00:19:23.228 ] 00:19:23.228 }, 00:19:23.228 { 00:19:23.228 "subsystem": "nvmf", 00:19:23.228 "config": [ 00:19:23.228 { 00:19:23.228 "method": "nvmf_set_config", 00:19:23.228 "params": { 00:19:23.228 "discovery_filter": "match_any", 00:19:23.228 "admin_cmd_passthru": { 00:19:23.228 "identify_ctrlr": false 00:19:23.228 }, 00:19:23.228 "dhchap_digests": [ 00:19:23.228 "sha256", 00:19:23.228 "sha384", 00:19:23.228 "sha512" 00:19:23.228 ], 00:19:23.228 "dhchap_dhgroups": [ 00:19:23.228 "null", 00:19:23.228 "ffdhe2048", 00:19:23.228 "ffdhe3072", 00:19:23.228 "ffdhe4096", 00:19:23.228 "ffdhe6144", 00:19:23.228 "ffdhe8192" 00:19:23.228 ] 00:19:23.228 } 00:19:23.228 }, 00:19:23.228 { 00:19:23.228 "method": "nvmf_set_max_subsystems", 00:19:23.228 "params": { 00:19:23.228 "max_subsystems": 1024 00:19:23.228 } 00:19:23.228 }, 00:19:23.228 { 00:19:23.228 "method": "nvmf_set_crdt", 00:19:23.228 "params": { 00:19:23.228 "crdt1": 0, 00:19:23.228 "crdt2": 0, 00:19:23.228 "crdt3": 0 00:19:23.228 } 00:19:23.228 }, 00:19:23.228 { 00:19:23.228 "method": "nvmf_create_transport", 00:19:23.228 "params": 
{ 00:19:23.228 "trtype": "TCP", 00:19:23.228 "max_queue_depth": 128, 00:19:23.228 "max_io_qpairs_per_ctrlr": 127, 00:19:23.228 "in_capsule_data_size": 4096, 00:19:23.228 "max_io_size": 131072, 00:19:23.228 "io_unit_size": 131072, 00:19:23.228 "max_aq_depth": 128, 00:19:23.228 "num_shared_buffers": 511, 00:19:23.228 "buf_cache_size": 4294967295, 00:19:23.228 "dif_insert_or_strip": false, 00:19:23.228 "zcopy": false, 00:19:23.228 "c2h_success": false, 00:19:23.228 "sock_priority": 0, 00:19:23.228 "abort_timeout_sec": 1, 00:19:23.228 "ack_timeout": 0, 00:19:23.228 "data_wr_pool_size": 0 00:19:23.228 } 00:19:23.228 }, 00:19:23.228 { 00:19:23.228 "method": "nvmf_create_subsystem", 00:19:23.228 "params": { 00:19:23.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.228 "allow_any_host": false, 00:19:23.228 "serial_number": "00000000000000000000", 00:19:23.228 "model_number": "SPDK bdev Controller", 00:19:23.228 "max_namespaces": 32, 00:19:23.228 "min_cntlid": 1, 00:19:23.228 "max_cntlid": 65519, 00:19:23.228 "ana_reporting": false 00:19:23.228 } 00:19:23.228 }, 00:19:23.228 { 00:19:23.228 "method": "nvmf_subsystem_add_host", 00:19:23.228 "params": { 00:19:23.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.228 "host": "nqn.2016-06.io.spdk:host1", 00:19:23.228 "psk": "key0" 00:19:23.228 } 00:19:23.228 }, 00:19:23.228 { 00:19:23.228 "method": "nvmf_subsystem_add_ns", 00:19:23.228 "params": { 00:19:23.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.228 "namespace": { 00:19:23.228 "nsid": 1, 00:19:23.228 "bdev_name": "malloc0", 00:19:23.228 "nguid": "5955DD9F563B4165BC6839C6F6B54C47", 00:19:23.228 "uuid": "5955dd9f-563b-4165-bc68-39c6f6b54c47", 00:19:23.228 "no_auto_visible": false 00:19:23.228 } 00:19:23.228 } 00:19:23.228 }, 00:19:23.228 { 00:19:23.228 "method": "nvmf_subsystem_add_listener", 00:19:23.228 "params": { 00:19:23.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.228 "listen_address": { 00:19:23.228 "trtype": "TCP", 00:19:23.228 "adrfam": "IPv4", 00:19:23.228 "traddr": "10.0.0.2", 00:19:23.228 "trsvcid": "4420" 00:19:23.228 }, 00:19:23.228 "secure_channel": false, 00:19:23.228 "sock_impl": "ssl" 00:19:23.228 } 00:19:23.228 } 00:19:23.228 ] 00:19:23.228 } 00:19:23.228 ] 00:19:23.228 }' 00:19:23.228 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:23.487 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:23.487 "subsystems": [ 00:19:23.487 { 00:19:23.487 "subsystem": "keyring", 00:19:23.487 "config": [ 00:19:23.487 { 00:19:23.487 "method": "keyring_file_add_key", 00:19:23.487 "params": { 00:19:23.487 "name": "key0", 00:19:23.487 "path": "/tmp/tmp.SErSMJc6E0" 00:19:23.487 } 00:19:23.487 } 00:19:23.487 ] 00:19:23.487 }, 00:19:23.487 { 00:19:23.487 "subsystem": "iobuf", 00:19:23.487 "config": [ 00:19:23.487 { 00:19:23.487 "method": "iobuf_set_options", 00:19:23.487 "params": { 00:19:23.487 "small_pool_count": 8192, 00:19:23.487 "large_pool_count": 1024, 00:19:23.487 "small_bufsize": 8192, 00:19:23.487 "large_bufsize": 135168, 00:19:23.487 "enable_numa": false 00:19:23.487 } 00:19:23.487 } 00:19:23.487 ] 00:19:23.487 }, 00:19:23.487 { 00:19:23.487 "subsystem": "sock", 00:19:23.487 "config": [ 00:19:23.487 { 00:19:23.487 "method": "sock_set_default_impl", 00:19:23.487 "params": { 00:19:23.487 "impl_name": "posix" 00:19:23.487 } 00:19:23.487 }, 00:19:23.487 { 00:19:23.487 "method": "sock_impl_set_options", 00:19:23.487 
"params": { 00:19:23.487 "impl_name": "ssl", 00:19:23.487 "recv_buf_size": 4096, 00:19:23.487 "send_buf_size": 4096, 00:19:23.487 "enable_recv_pipe": true, 00:19:23.487 "enable_quickack": false, 00:19:23.487 "enable_placement_id": 0, 00:19:23.487 "enable_zerocopy_send_server": true, 00:19:23.487 "enable_zerocopy_send_client": false, 00:19:23.487 "zerocopy_threshold": 0, 00:19:23.487 "tls_version": 0, 00:19:23.487 "enable_ktls": false 00:19:23.487 } 00:19:23.487 }, 00:19:23.487 { 00:19:23.487 "method": "sock_impl_set_options", 00:19:23.487 "params": { 00:19:23.487 "impl_name": "posix", 00:19:23.487 "recv_buf_size": 2097152, 00:19:23.487 "send_buf_size": 2097152, 00:19:23.487 "enable_recv_pipe": true, 00:19:23.487 "enable_quickack": false, 00:19:23.487 "enable_placement_id": 0, 00:19:23.487 "enable_zerocopy_send_server": true, 00:19:23.487 "enable_zerocopy_send_client": false, 00:19:23.487 "zerocopy_threshold": 0, 00:19:23.487 "tls_version": 0, 00:19:23.487 "enable_ktls": false 00:19:23.487 } 00:19:23.487 } 00:19:23.487 ] 00:19:23.487 }, 00:19:23.487 { 00:19:23.487 "subsystem": "vmd", 00:19:23.487 "config": [] 00:19:23.487 }, 00:19:23.487 { 00:19:23.487 "subsystem": "accel", 00:19:23.487 "config": [ 00:19:23.487 { 00:19:23.487 "method": "accel_set_options", 00:19:23.487 "params": { 00:19:23.487 "small_cache_size": 128, 00:19:23.487 "large_cache_size": 16, 00:19:23.487 "task_count": 2048, 00:19:23.487 "sequence_count": 2048, 00:19:23.487 "buf_count": 2048 00:19:23.487 } 00:19:23.487 } 00:19:23.487 ] 00:19:23.487 }, 00:19:23.487 { 00:19:23.487 "subsystem": "bdev", 00:19:23.487 "config": [ 00:19:23.487 { 00:19:23.487 "method": "bdev_set_options", 00:19:23.487 "params": { 00:19:23.487 "bdev_io_pool_size": 65535, 00:19:23.487 "bdev_io_cache_size": 256, 00:19:23.487 "bdev_auto_examine": true, 00:19:23.487 "iobuf_small_cache_size": 128, 00:19:23.487 "iobuf_large_cache_size": 16 00:19:23.487 } 00:19:23.487 }, 00:19:23.487 { 00:19:23.487 "method": "bdev_raid_set_options", 00:19:23.487 "params": { 00:19:23.487 "process_window_size_kb": 1024, 00:19:23.487 "process_max_bandwidth_mb_sec": 0 00:19:23.487 } 00:19:23.487 }, 00:19:23.487 { 00:19:23.487 "method": "bdev_iscsi_set_options", 00:19:23.487 "params": { 00:19:23.487 "timeout_sec": 30 00:19:23.487 } 00:19:23.487 }, 00:19:23.487 { 00:19:23.487 "method": "bdev_nvme_set_options", 00:19:23.487 "params": { 00:19:23.487 "action_on_timeout": "none", 00:19:23.487 "timeout_us": 0, 00:19:23.487 "timeout_admin_us": 0, 00:19:23.487 "keep_alive_timeout_ms": 10000, 00:19:23.487 "arbitration_burst": 0, 00:19:23.487 "low_priority_weight": 0, 00:19:23.487 "medium_priority_weight": 0, 00:19:23.487 "high_priority_weight": 0, 00:19:23.487 "nvme_adminq_poll_period_us": 10000, 00:19:23.487 "nvme_ioq_poll_period_us": 0, 00:19:23.487 "io_queue_requests": 512, 00:19:23.487 "delay_cmd_submit": true, 00:19:23.487 "transport_retry_count": 4, 00:19:23.487 "bdev_retry_count": 3, 00:19:23.487 "transport_ack_timeout": 0, 00:19:23.487 "ctrlr_loss_timeout_sec": 0, 00:19:23.487 "reconnect_delay_sec": 0, 00:19:23.487 "fast_io_fail_timeout_sec": 0, 00:19:23.487 "disable_auto_failback": false, 00:19:23.487 "generate_uuids": false, 00:19:23.487 "transport_tos": 0, 00:19:23.487 "nvme_error_stat": false, 00:19:23.487 "rdma_srq_size": 0, 00:19:23.487 "io_path_stat": false, 00:19:23.488 "allow_accel_sequence": false, 00:19:23.488 "rdma_max_cq_size": 0, 00:19:23.488 "rdma_cm_event_timeout_ms": 0, 00:19:23.488 "dhchap_digests": [ 00:19:23.488 "sha256", 00:19:23.488 "sha384", 00:19:23.488 
"sha512" 00:19:23.488 ], 00:19:23.488 "dhchap_dhgroups": [ 00:19:23.488 "null", 00:19:23.488 "ffdhe2048", 00:19:23.488 "ffdhe3072", 00:19:23.488 "ffdhe4096", 00:19:23.488 "ffdhe6144", 00:19:23.488 "ffdhe8192" 00:19:23.488 ] 00:19:23.488 } 00:19:23.488 }, 00:19:23.488 { 00:19:23.488 "method": "bdev_nvme_attach_controller", 00:19:23.488 "params": { 00:19:23.488 "name": "nvme0", 00:19:23.488 "trtype": "TCP", 00:19:23.488 "adrfam": "IPv4", 00:19:23.488 "traddr": "10.0.0.2", 00:19:23.488 "trsvcid": "4420", 00:19:23.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.488 "prchk_reftag": false, 00:19:23.488 "prchk_guard": false, 00:19:23.488 "ctrlr_loss_timeout_sec": 0, 00:19:23.488 "reconnect_delay_sec": 0, 00:19:23.488 "fast_io_fail_timeout_sec": 0, 00:19:23.488 "psk": "key0", 00:19:23.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.488 "hdgst": false, 00:19:23.488 "ddgst": false, 00:19:23.488 "multipath": "multipath" 00:19:23.488 } 00:19:23.488 }, 00:19:23.488 { 00:19:23.488 "method": "bdev_nvme_set_hotplug", 00:19:23.488 "params": { 00:19:23.488 "period_us": 100000, 00:19:23.488 "enable": false 00:19:23.488 } 00:19:23.488 }, 00:19:23.488 { 00:19:23.488 "method": "bdev_enable_histogram", 00:19:23.488 "params": { 00:19:23.488 "name": "nvme0n1", 00:19:23.488 "enable": true 00:19:23.488 } 00:19:23.488 }, 00:19:23.488 { 00:19:23.488 "method": "bdev_wait_for_examine" 00:19:23.488 } 00:19:23.488 ] 00:19:23.488 }, 00:19:23.488 { 00:19:23.488 "subsystem": "nbd", 00:19:23.488 "config": [] 00:19:23.488 } 00:19:23.488 ] 00:19:23.488 }' 00:19:23.488 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1047837 00:19:23.488 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1047837 ']' 00:19:23.488 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1047837 00:19:23.488 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:23.488 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.488 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1047837 00:19:23.488 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:23.488 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:23.488 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1047837' 00:19:23.488 killing process with pid 1047837 00:19:23.488 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1047837 00:19:23.488 Received shutdown signal, test time was about 1.000000 seconds 00:19:23.488 00:19:23.488 Latency(us) 00:19:23.488 [2024-11-15T11:41:03.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.488 [2024-11-15T11:41:03.832Z] =================================================================================================================== 00:19:23.488 [2024-11-15T11:41:03.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.488 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1047837 00:19:23.745 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1047700 00:19:23.745 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1047700 
']' 00:19:23.745 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1047700 00:19:23.745 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:23.745 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.745 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1047700 00:19:23.745 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.745 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.745 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1047700' 00:19:23.745 killing process with pid 1047700 00:19:23.745 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1047700 00:19:23.746 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1047700 00:19:24.004 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:24.004 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.004 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:24.004 "subsystems": [ 00:19:24.004 { 00:19:24.004 "subsystem": "keyring", 00:19:24.004 "config": [ 00:19:24.004 { 00:19:24.004 "method": "keyring_file_add_key", 00:19:24.004 "params": { 00:19:24.004 "name": "key0", 00:19:24.004 "path": "/tmp/tmp.SErSMJc6E0" 00:19:24.004 } 00:19:24.004 } 00:19:24.004 ] 00:19:24.004 }, 00:19:24.004 { 00:19:24.004 "subsystem": "iobuf", 00:19:24.004 "config": [ 00:19:24.004 { 00:19:24.004 "method": "iobuf_set_options", 00:19:24.004 "params": { 00:19:24.004 "small_pool_count": 8192, 00:19:24.004 "large_pool_count": 1024, 00:19:24.004 "small_bufsize": 8192, 00:19:24.004 "large_bufsize": 135168, 00:19:24.004 "enable_numa": false 00:19:24.004 } 00:19:24.004 } 00:19:24.004 ] 00:19:24.004 }, 00:19:24.004 { 00:19:24.004 "subsystem": "sock", 00:19:24.004 "config": [ 00:19:24.004 { 00:19:24.004 "method": "sock_set_default_impl", 00:19:24.004 "params": { 00:19:24.004 "impl_name": "posix" 00:19:24.004 } 00:19:24.004 }, 00:19:24.004 { 00:19:24.004 "method": "sock_impl_set_options", 00:19:24.004 "params": { 00:19:24.004 "impl_name": "ssl", 00:19:24.004 "recv_buf_size": 4096, 00:19:24.004 "send_buf_size": 4096, 00:19:24.004 "enable_recv_pipe": true, 00:19:24.004 "enable_quickack": false, 00:19:24.004 "enable_placement_id": 0, 00:19:24.004 "enable_zerocopy_send_server": true, 00:19:24.004 "enable_zerocopy_send_client": false, 00:19:24.004 "zerocopy_threshold": 0, 00:19:24.004 "tls_version": 0, 00:19:24.004 "enable_ktls": false 00:19:24.004 } 00:19:24.004 }, 00:19:24.004 { 00:19:24.004 "method": "sock_impl_set_options", 00:19:24.004 "params": { 00:19:24.004 "impl_name": "posix", 00:19:24.004 "recv_buf_size": 2097152, 00:19:24.004 "send_buf_size": 2097152, 00:19:24.004 "enable_recv_pipe": true, 00:19:24.004 "enable_quickack": false, 00:19:24.004 "enable_placement_id": 0, 00:19:24.004 "enable_zerocopy_send_server": true, 00:19:24.004 "enable_zerocopy_send_client": false, 00:19:24.004 "zerocopy_threshold": 0, 00:19:24.004 "tls_version": 0, 00:19:24.004 "enable_ktls": false 00:19:24.004 } 00:19:24.004 } 00:19:24.004 ] 00:19:24.004 }, 00:19:24.004 { 00:19:24.004 "subsystem": 
"vmd", 00:19:24.004 "config": [] 00:19:24.004 }, 00:19:24.004 { 00:19:24.004 "subsystem": "accel", 00:19:24.004 "config": [ 00:19:24.004 { 00:19:24.004 "method": "accel_set_options", 00:19:24.004 "params": { 00:19:24.004 "small_cache_size": 128, 00:19:24.004 "large_cache_size": 16, 00:19:24.004 "task_count": 2048, 00:19:24.004 "sequence_count": 2048, 00:19:24.004 "buf_count": 2048 00:19:24.004 } 00:19:24.004 } 00:19:24.004 ] 00:19:24.004 }, 00:19:24.004 { 00:19:24.004 "subsystem": "bdev", 00:19:24.004 "config": [ 00:19:24.004 { 00:19:24.004 "method": "bdev_set_options", 00:19:24.004 "params": { 00:19:24.004 "bdev_io_pool_size": 65535, 00:19:24.004 "bdev_io_cache_size": 256, 00:19:24.004 "bdev_auto_examine": true, 00:19:24.004 "iobuf_small_cache_size": 128, 00:19:24.004 "iobuf_large_cache_size": 16 00:19:24.004 } 00:19:24.004 }, 00:19:24.004 { 00:19:24.004 "method": "bdev_raid_set_options", 00:19:24.004 "params": { 00:19:24.004 "process_window_size_kb": 1024, 00:19:24.004 "process_max_bandwidth_mb_sec": 0 00:19:24.004 } 00:19:24.004 }, 00:19:24.004 { 00:19:24.004 "method": "bdev_iscsi_set_options", 00:19:24.004 "params": { 00:19:24.004 "timeout_sec": 30 00:19:24.004 } 00:19:24.004 }, 00:19:24.004 { 00:19:24.004 "method": "bdev_nvme_set_options", 00:19:24.004 "params": { 00:19:24.004 "action_on_timeout": "none", 00:19:24.004 "timeout_us": 0, 00:19:24.004 "timeout_admin_us": 0, 00:19:24.004 "keep_alive_timeout_ms": 10000, 00:19:24.004 "arbitration_burst": 0, 00:19:24.004 "low_priority_weight": 0, 00:19:24.004 "medium_priority_weight": 0, 00:19:24.004 "high_priority_weight": 0, 00:19:24.004 "nvme_adminq_poll_period_us": 10000, 00:19:24.005 "nvme_ioq_poll_period_us": 0, 00:19:24.005 "io_queue_requests": 0, 00:19:24.005 "delay_cmd_submit": true, 00:19:24.005 "transport_retry_count": 4, 00:19:24.005 "bdev_retry_count": 3, 00:19:24.005 "transport_ack_timeout": 0, 00:19:24.005 "ctrlr_loss_timeout_sec": 0, 00:19:24.005 "reconnect_delay_sec": 0, 00:19:24.005 "fast_io_fail_timeout_sec": 0, 00:19:24.005 "disable_auto_failback": false, 00:19:24.005 "generate_uuids": false, 00:19:24.005 "transport_tos": 0, 00:19:24.005 "nvme_error_stat": false, 00:19:24.005 "rdma_srq_size": 0, 00:19:24.005 "io_path_stat": false, 00:19:24.005 "allow_accel_sequence": false, 00:19:24.005 "rdma_max_cq_size": 0, 00:19:24.005 "rdma_cm_event_timeout_ms": 0, 00:19:24.005 "dhchap_digests": [ 00:19:24.005 "sha256", 00:19:24.005 "sha384", 00:19:24.005 "sha512" 00:19:24.005 ], 00:19:24.005 "dhchap_dhgroups": [ 00:19:24.005 "null", 00:19:24.005 "ffdhe2048", 00:19:24.005 "ffdhe3072", 00:19:24.005 "ffdhe4096", 00:19:24.005 "ffdhe6144", 00:19:24.005 "ffdhe8192" 00:19:24.005 ] 00:19:24.005 } 00:19:24.005 }, 00:19:24.005 { 00:19:24.005 "method": "bdev_nvme_set_hotplug", 00:19:24.005 "params": { 00:19:24.005 "period_us": 100000, 00:19:24.005 "enable": false 00:19:24.005 } 00:19:24.005 }, 00:19:24.005 { 00:19:24.005 "method": "bdev_malloc_create", 00:19:24.005 "params": { 00:19:24.005 "name": "malloc0", 00:19:24.005 "num_blocks": 8192, 00:19:24.005 "block_size": 4096, 00:19:24.005 "physical_block_size": 4096, 00:19:24.005 "uuid": "5955dd9f-563b-4165-bc68-39c6f6b54c47", 00:19:24.005 "optimal_io_boundary": 0, 00:19:24.005 "md_size": 0, 00:19:24.005 "dif_type": 0, 00:19:24.005 "dif_is_head_of_md": false, 00:19:24.005 "dif_pi_format": 0 00:19:24.005 } 00:19:24.005 }, 00:19:24.005 { 00:19:24.005 "method": "bdev_wait_for_examine" 00:19:24.005 } 00:19:24.005 ] 00:19:24.005 }, 00:19:24.005 { 00:19:24.005 "subsystem": "nbd", 00:19:24.005 "config": 
[] 00:19:24.005 }, 00:19:24.005 { 00:19:24.005 "subsystem": "scheduler", 00:19:24.005 "config": [ 00:19:24.005 { 00:19:24.005 "method": "framework_set_scheduler", 00:19:24.005 "params": { 00:19:24.005 "name": "static" 00:19:24.005 } 00:19:24.005 } 00:19:24.005 ] 00:19:24.005 }, 00:19:24.005 { 00:19:24.005 "subsystem": "nvmf", 00:19:24.005 "config": [ 00:19:24.005 { 00:19:24.005 "method": "nvmf_set_config", 00:19:24.005 "params": { 00:19:24.005 "discovery_filter": "match_any", 00:19:24.005 "admin_cmd_passthru": { 00:19:24.005 "identify_ctrlr": false 00:19:24.005 }, 00:19:24.005 "dhchap_digests": [ 00:19:24.005 "sha256", 00:19:24.005 "sha384", 00:19:24.005 "sha512" 00:19:24.005 ], 00:19:24.005 "dhchap_dhgroups": [ 00:19:24.005 "null", 00:19:24.005 "ffdhe2048", 00:19:24.005 "ffdhe3072", 00:19:24.005 "ffdhe4096", 00:19:24.005 "ffdhe6144", 00:19:24.005 "ffdhe8192" 00:19:24.005 ] 00:19:24.005 } 00:19:24.005 }, 00:19:24.005 { 00:19:24.005 "method": "nvmf_set_max_subsystems", 00:19:24.005 "params": { 00:19:24.005 "max_subsystems": 1024 00:19:24.005 } 00:19:24.005 }, 00:19:24.005 { 00:19:24.005 "method": "nvmf_set_crdt", 00:19:24.005 "params": { 00:19:24.005 "crdt1": 0, 00:19:24.005 "crdt2": 0, 00:19:24.005 "crdt3": 0 00:19:24.005 } 00:19:24.005 }, 00:19:24.005 { 00:19:24.005 "method": "nvmf_create_transport", 00:19:24.005 "params": { 00:19:24.005 "trtype": "TCP", 00:19:24.005 "max_queue_depth": 128, 00:19:24.005 "max_io_qpairs_per_ctrlr": 127, 00:19:24.005 "in_capsule_data_size": 4096, 00:19:24.005 "max_io_size": 131072, 00:19:24.005 "io_unit_size": 131072, 00:19:24.005 "max_aq_depth": 128, 00:19:24.005 "num_shared_buffers": 511, 00:19:24.005 "buf_cache_size": 4294967295, 00:19:24.005 "dif_insert_or_strip": false, 00:19:24.005 "zcopy": false, 00:19:24.005 "c2h_success": false, 00:19:24.005 "sock_priority": 0, 00:19:24.005 "abort_timeout_sec": 1, 00:19:24.005 "ack_timeout": 0, 00:19:24.005 "data_wr_pool_size": 0 00:19:24.005 } 00:19:24.005 }, 00:19:24.005 { 00:19:24.005 "method": "nvmf_create_subsystem", 00:19:24.005 "params": { 00:19:24.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.005 "allow_any_host": false, 00:19:24.005 "serial_number": "00000000000000000000", 00:19:24.005 "model_number": "SPDK bdev Controller", 00:19:24.005 "max_namespaces": 32, 00:19:24.005 "min_cntlid": 1, 00:19:24.005 "max_cntlid": 65519, 00:19:24.005 "ana_reporting": false 00:19:24.005 } 00:19:24.005 }, 00:19:24.005 { 00:19:24.005 "method": "nvmf_subsystem_add_host", 00:19:24.005 "params": { 00:19:24.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.005 "host": "nqn.2016-06.io.spdk:host1", 00:19:24.005 "psk": "key0" 00:19:24.005 } 00:19:24.005 }, 00:19:24.005 { 00:19:24.005 "method": "nvmf_subsystem_add_ns", 00:19:24.005 "params": { 00:19:24.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.005 "namespace": { 00:19:24.005 "nsid": 1, 00:19:24.005 "bdev_name": "malloc0", 00:19:24.005 "nguid": "5955DD9F563B4165BC6839C6F6B54C47", 00:19:24.005 "uuid": "5955dd9f-563b-4165-bc68-39c6f6b54c47", 00:19:24.005 "no_auto_visible": false 00:19:24.005 } 00:19:24.005 } 00:19:24.005 }, 00:19:24.005 { 00:19:24.005 "method": "nvmf_subsystem_add_listener", 00:19:24.005 "params": { 00:19:24.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.005 "listen_address": { 00:19:24.005 "trtype": "TCP", 00:19:24.005 "adrfam": "IPv4", 00:19:24.005 "traddr": "10.0.0.2", 00:19:24.005 "trsvcid": "4420" 00:19:24.005 }, 00:19:24.005 "secure_channel": false, 00:19:24.005 "sock_impl": "ssl" 00:19:24.005 } 00:19:24.005 } 00:19:24.005 ] 00:19:24.005 } 
00:19:24.005 ] 00:19:24.005 }' 00:19:24.005 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.005 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.005 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1048132 00:19:24.005 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:24.005 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1048132 00:19:24.005 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1048132 ']' 00:19:24.005 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.005 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.005 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.005 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.005 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.005 [2024-11-15 12:41:04.313983] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:19:24.005 [2024-11-15 12:41:04.314080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.263 [2024-11-15 12:41:04.387964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.263 [2024-11-15 12:41:04.443006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.263 [2024-11-15 12:41:04.443058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.263 [2024-11-15 12:41:04.443085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.263 [2024-11-15 12:41:04.443096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.263 [2024-11-15 12:41:04.443105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:24.263 [2024-11-15 12:41:04.443691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.524 [2024-11-15 12:41:04.683883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.524 [2024-11-15 12:41:04.715895] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:24.524 [2024-11-15 12:41:04.716140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.090 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.090 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:25.090 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:25.090 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.090 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.090 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.090 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1048279 00:19:25.090 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1048279 /var/tmp/bdevperf.sock 00:19:25.090 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1048279 ']' 00:19:25.090 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.090 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:25.090 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.090 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:25.090 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:25.090 "subsystems": [ 00:19:25.090 { 00:19:25.090 "subsystem": "keyring", 00:19:25.090 "config": [ 00:19:25.090 { 00:19:25.090 "method": "keyring_file_add_key", 00:19:25.090 "params": { 00:19:25.090 "name": "key0", 00:19:25.090 "path": "/tmp/tmp.SErSMJc6E0" 00:19:25.090 } 00:19:25.090 } 00:19:25.090 ] 00:19:25.090 }, 00:19:25.090 { 00:19:25.090 "subsystem": "iobuf", 00:19:25.090 "config": [ 00:19:25.090 { 00:19:25.090 "method": "iobuf_set_options", 00:19:25.090 "params": { 00:19:25.090 "small_pool_count": 8192, 00:19:25.090 "large_pool_count": 1024, 00:19:25.090 "small_bufsize": 8192, 00:19:25.090 "large_bufsize": 135168, 00:19:25.090 "enable_numa": false 00:19:25.090 } 00:19:25.090 } 00:19:25.090 ] 00:19:25.090 }, 00:19:25.090 { 00:19:25.090 "subsystem": "sock", 00:19:25.090 "config": [ 00:19:25.090 { 00:19:25.090 "method": "sock_set_default_impl", 00:19:25.090 "params": { 00:19:25.090 "impl_name": "posix" 00:19:25.090 } 00:19:25.090 }, 00:19:25.090 { 00:19:25.090 "method": "sock_impl_set_options", 00:19:25.090 "params": { 00:19:25.090 "impl_name": "ssl", 00:19:25.090 "recv_buf_size": 4096, 00:19:25.090 "send_buf_size": 4096, 00:19:25.090 "enable_recv_pipe": true, 00:19:25.090 "enable_quickack": false, 00:19:25.090 "enable_placement_id": 0, 00:19:25.090 "enable_zerocopy_send_server": true, 00:19:25.090 "enable_zerocopy_send_client": false, 00:19:25.090 "zerocopy_threshold": 0, 00:19:25.090 "tls_version": 0, 00:19:25.090 "enable_ktls": false 00:19:25.090 } 00:19:25.090 }, 00:19:25.090 { 00:19:25.090 "method": "sock_impl_set_options", 00:19:25.090 "params": { 00:19:25.090 "impl_name": "posix", 00:19:25.090 "recv_buf_size": 2097152, 00:19:25.090 "send_buf_size": 2097152, 00:19:25.090 "enable_recv_pipe": true, 00:19:25.090 "enable_quickack": false, 00:19:25.090 "enable_placement_id": 0, 00:19:25.090 "enable_zerocopy_send_server": true, 00:19:25.090 "enable_zerocopy_send_client": false, 00:19:25.090 "zerocopy_threshold": 0, 00:19:25.090 "tls_version": 0, 00:19:25.090 "enable_ktls": false 00:19:25.090 } 00:19:25.090 } 00:19:25.090 ] 00:19:25.090 }, 00:19:25.090 { 00:19:25.090 "subsystem": "vmd", 00:19:25.090 "config": [] 00:19:25.090 }, 00:19:25.090 { 00:19:25.090 "subsystem": "accel", 00:19:25.090 "config": [ 00:19:25.090 { 00:19:25.090 "method": "accel_set_options", 00:19:25.090 "params": { 00:19:25.090 "small_cache_size": 128, 00:19:25.090 "large_cache_size": 16, 00:19:25.090 "task_count": 2048, 00:19:25.090 "sequence_count": 2048, 00:19:25.090 "buf_count": 2048 00:19:25.090 } 00:19:25.090 } 00:19:25.090 ] 00:19:25.090 }, 00:19:25.090 { 00:19:25.090 "subsystem": "bdev", 00:19:25.090 "config": [ 00:19:25.090 { 00:19:25.090 "method": "bdev_set_options", 00:19:25.090 "params": { 00:19:25.090 "bdev_io_pool_size": 65535, 00:19:25.090 "bdev_io_cache_size": 256, 00:19:25.090 "bdev_auto_examine": true, 00:19:25.090 "iobuf_small_cache_size": 128, 00:19:25.090 "iobuf_large_cache_size": 16 00:19:25.090 } 00:19:25.090 }, 00:19:25.090 { 00:19:25.090 "method": "bdev_raid_set_options", 00:19:25.090 "params": { 00:19:25.090 "process_window_size_kb": 1024, 00:19:25.090 "process_max_bandwidth_mb_sec": 0 00:19:25.090 } 00:19:25.090 }, 00:19:25.090 { 00:19:25.090 "method": "bdev_iscsi_set_options", 00:19:25.090 "params": { 00:19:25.090 "timeout_sec": 30 00:19:25.090 } 00:19:25.090 }, 00:19:25.090 { 00:19:25.090 "method": "bdev_nvme_set_options", 00:19:25.090 "params": { 00:19:25.090 "action_on_timeout": "none", 
00:19:25.090 "timeout_us": 0, 00:19:25.090 "timeout_admin_us": 0, 00:19:25.090 "keep_alive_timeout_ms": 10000, 00:19:25.090 "arbitration_burst": 0, 00:19:25.090 "low_priority_weight": 0, 00:19:25.090 "medium_priority_weight": 0, 00:19:25.090 "high_priority_weight": 0, 00:19:25.090 "nvme_adminq_poll_period_us": 10000, 00:19:25.090 "nvme_ioq_poll_period_us": 0, 00:19:25.090 "io_queue_requests": 512, 00:19:25.090 "delay_cmd_submit": true, 00:19:25.090 "transport_retry_count": 4, 00:19:25.090 "bdev_retry_count": 3, 00:19:25.090 "transport_ack_timeout": 0, 00:19:25.090 "ctrlr_loss_timeout_sec": 0, 00:19:25.090 "reconnect_delay_sec": 0, 00:19:25.090 "fast_io_fail_timeout_sec": 0, 00:19:25.090 "disable_auto_failback": false, 00:19:25.090 "generate_uuids": false, 00:19:25.090 "transport_tos": 0, 00:19:25.091 "nvme_error_stat": false, 00:19:25.091 "rdma_srq_size": 0, 00:19:25.091 "io_path_stat": false, 00:19:25.091 "allow_accel_sequence": false, 00:19:25.091 "rdma_max_cq_size": 0, 00:19:25.091 "rdma_cm_event_timeout_ms": 0Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.091 , 00:19:25.091 "dhchap_digests": [ 00:19:25.091 "sha256", 00:19:25.091 "sha384", 00:19:25.091 "sha512" 00:19:25.091 ], 00:19:25.091 "dhchap_dhgroups": [ 00:19:25.091 "null", 00:19:25.091 "ffdhe2048", 00:19:25.091 "ffdhe3072", 00:19:25.091 "ffdhe4096", 00:19:25.091 "ffdhe6144", 00:19:25.091 "ffdhe8192" 00:19:25.091 ] 00:19:25.091 } 00:19:25.091 }, 00:19:25.091 { 00:19:25.091 "method": "bdev_nvme_attach_controller", 00:19:25.091 "params": { 00:19:25.091 "name": "nvme0", 00:19:25.091 "trtype": "TCP", 00:19:25.091 "adrfam": "IPv4", 00:19:25.091 "traddr": "10.0.0.2", 00:19:25.091 "trsvcid": "4420", 00:19:25.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.091 "prchk_reftag": false, 00:19:25.091 "prchk_guard": false, 00:19:25.091 "ctrlr_loss_timeout_sec": 0, 00:19:25.091 "reconnect_delay_sec": 0, 00:19:25.091 "fast_io_fail_timeout_sec": 0, 00:19:25.091 "psk": "key0", 00:19:25.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.091 "hdgst": false, 00:19:25.091 "ddgst": false, 00:19:25.091 "multipath": "multipath" 00:19:25.091 } 00:19:25.091 }, 00:19:25.091 { 00:19:25.091 "method": "bdev_nvme_set_hotplug", 00:19:25.091 "params": { 00:19:25.091 "period_us": 100000, 00:19:25.091 "enable": false 00:19:25.091 } 00:19:25.091 }, 00:19:25.091 { 00:19:25.091 "method": "bdev_enable_histogram", 00:19:25.091 "params": { 00:19:25.091 "name": "nvme0n1", 00:19:25.091 "enable": true 00:19:25.091 } 00:19:25.091 }, 00:19:25.091 { 00:19:25.091 "method": "bdev_wait_for_examine" 00:19:25.091 } 00:19:25.091 ] 00:19:25.091 }, 00:19:25.091 { 00:19:25.091 "subsystem": "nbd", 00:19:25.091 "config": [] 00:19:25.091 } 00:19:25.091 ] 00:19:25.091 }' 00:19:25.091 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.091 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.091 [2024-11-15 12:41:05.374950] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:19:25.091 [2024-11-15 12:41:05.375040] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1048279 ] 00:19:25.349 [2024-11-15 12:41:05.446057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.349 [2024-11-15 12:41:05.506251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.349 [2024-11-15 12:41:05.689500] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.607 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.608 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:25.608 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:25.608 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:25.866 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.866 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:25.866 Running I/O for 1 seconds... 00:19:27.241 3597.00 IOPS, 14.05 MiB/s 00:19:27.241 Latency(us) 00:19:27.241 [2024-11-15T11:41:07.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.241 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:27.241 Verification LBA range: start 0x0 length 0x2000 00:19:27.241 nvme0n1 : 1.02 3644.89 14.24 0.00 0.00 34778.66 6844.87 30486.38 00:19:27.241 [2024-11-15T11:41:07.585Z] =================================================================================================================== 00:19:27.241 [2024-11-15T11:41:07.585Z] Total : 3644.89 14.24 0.00 0.00 34778.66 6844.87 30486.38 00:19:27.241 { 00:19:27.241 "results": [ 00:19:27.241 { 00:19:27.241 "job": "nvme0n1", 00:19:27.241 "core_mask": "0x2", 00:19:27.241 "workload": "verify", 00:19:27.241 "status": "finished", 00:19:27.241 "verify_range": { 00:19:27.241 "start": 0, 00:19:27.241 "length": 8192 00:19:27.241 }, 00:19:27.241 "queue_depth": 128, 00:19:27.241 "io_size": 4096, 00:19:27.241 "runtime": 1.021979, 00:19:27.241 "iops": 3644.888984998713, 00:19:27.241 "mibps": 14.237847597651223, 00:19:27.241 "io_failed": 0, 00:19:27.241 "io_timeout": 0, 00:19:27.241 "avg_latency_us": 34778.663376783494, 00:19:27.241 "min_latency_us": 6844.8711111111115, 00:19:27.241 "max_latency_us": 30486.376296296297 00:19:27.241 } 00:19:27.241 ], 00:19:27.241 "core_count": 1 00:19:27.241 } 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' 
--id = --pid ']' 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:27.241 nvmf_trace.0 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1048279 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1048279 ']' 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1048279 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1048279 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1048279' 00:19:27.241 killing process with pid 1048279 00:19:27.241 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1048279 00:19:27.241 Received shutdown signal, test time was about 1.000000 seconds 00:19:27.241 00:19:27.242 Latency(us) 00:19:27.242 [2024-11-15T11:41:07.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.242 [2024-11-15T11:41:07.586Z] =================================================================================================================== 00:19:27.242 [2024-11-15T11:41:07.586Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.242 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1048279 00:19:27.242 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:27.242 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:27.242 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:27.242 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:27.242 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:27.242 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:27.242 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:27.242 rmmod nvme_tcp 00:19:27.500 rmmod nvme_fabrics 00:19:27.500 rmmod nvme_keyring 00:19:27.500 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:27.500 12:41:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:27.500 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:27.500 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1048132 ']' 00:19:27.500 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1048132 00:19:27.500 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1048132 ']' 00:19:27.500 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1048132 00:19:27.500 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:27.500 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.500 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1048132 00:19:27.500 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.500 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.500 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1048132' 00:19:27.500 killing process with pid 1048132 00:19:27.500 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1048132 00:19:27.500 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1048132 00:19:27.758 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:27.758 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:27.758 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:27.758 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:27.758 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:27.758 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:27.758 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:27.758 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:27.758 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:27.758 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.758 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.758 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.665 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:29.665 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.WNswyHnNPZ /tmp/tmp.ctiDL7bCbM /tmp/tmp.SErSMJc6E0 00:19:29.665 00:19:29.665 real 1m22.945s 00:19:29.665 user 2m20.205s 00:19:29.665 sys 0m24.352s 00:19:29.665 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.665 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.665 ************************************ 00:19:29.665 END TEST nvmf_tls 
00:19:29.665 ************************************ 00:19:29.665 12:41:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:29.665 12:41:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:29.665 12:41:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.665 12:41:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:29.665 ************************************ 00:19:29.665 START TEST nvmf_fips 00:19:29.665 ************************************ 00:19:29.665 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:29.927 * Looking for test storage... 00:19:29.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:29.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.927 --rc genhtml_branch_coverage=1 00:19:29.927 --rc genhtml_function_coverage=1 00:19:29.927 --rc genhtml_legend=1 00:19:29.927 --rc geninfo_all_blocks=1 00:19:29.927 --rc geninfo_unexecuted_blocks=1 00:19:29.927 00:19:29.927 ' 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:29.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.927 --rc genhtml_branch_coverage=1 00:19:29.927 --rc genhtml_function_coverage=1 00:19:29.927 --rc genhtml_legend=1 00:19:29.927 --rc geninfo_all_blocks=1 00:19:29.927 --rc geninfo_unexecuted_blocks=1 00:19:29.927 00:19:29.927 ' 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:29.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.927 --rc genhtml_branch_coverage=1 00:19:29.927 --rc genhtml_function_coverage=1 00:19:29.927 --rc genhtml_legend=1 00:19:29.927 --rc geninfo_all_blocks=1 00:19:29.927 --rc geninfo_unexecuted_blocks=1 00:19:29.927 00:19:29.927 ' 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:29.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.927 --rc genhtml_branch_coverage=1 00:19:29.927 --rc genhtml_function_coverage=1 00:19:29.927 --rc genhtml_legend=1 00:19:29.927 --rc geninfo_all_blocks=1 00:19:29.927 --rc geninfo_unexecuted_blocks=1 00:19:29.927 00:19:29.927 ' 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.927 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:29.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:29.928 12:41:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:29.928 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:29.929 Error setting digest 00:19:29.929 4012E13C837F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:29.929 4012E13C837F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:29.929 
12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:29.929 12:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.463 12:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:32.463 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:32.463 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:32.463 12:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:32.463 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:32.463 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:32.463 12:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:32.463 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:32.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:19:32.463 00:19:32.463 --- 10.0.0.2 ping statistics --- 00:19:32.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.463 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:32.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:19:32.464 00:19:32.464 --- 10.0.0.1 ping statistics --- 00:19:32.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.464 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1050633 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1050633 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1050633 ']' 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.464 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:32.464 [2024-11-15 12:41:12.607910] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:19:32.464 [2024-11-15 12:41:12.607985] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.464 [2024-11-15 12:41:12.680775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.464 [2024-11-15 12:41:12.736293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.464 [2024-11-15 12:41:12.736344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.464 [2024-11-15 12:41:12.736367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.464 [2024-11-15 12:41:12.736377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.464 [2024-11-15 12:41:12.736387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.464 [2024-11-15 12:41:12.736985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.mez 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.mez 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.mez 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.mez 00:19:32.723 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.981 [2024-11-15 12:41:13.158635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.981 [2024-11-15 12:41:13.174631] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:32.981 [2024-11-15 12:41:13.174876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.981 malloc0 00:19:32.981 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.981 12:41:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1050672 00:19:32.981 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.981 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1050672 /var/tmp/bdevperf.sock 00:19:32.981 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1050672 ']' 00:19:32.981 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.981 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.981 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.981 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.981 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:32.981 [2024-11-15 12:41:13.309588] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:19:32.981 [2024-11-15 12:41:13.309694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050672 ] 00:19:33.240 [2024-11-15 12:41:13.376589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.240 [2024-11-15 12:41:13.434799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.240 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.240 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:33.240 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.mez 00:19:33.497 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:33.754 [2024-11-15 12:41:14.093398] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:34.012 TLSTESTn1 00:19:34.012 12:41:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:34.012 Running I/O for 10 seconds... 
00:19:36.317 3079.00 IOPS, 12.03 MiB/s [2024-11-15T11:41:17.595Z] 3162.50 IOPS, 12.35 MiB/s [2024-11-15T11:41:18.528Z] 3172.00 IOPS, 12.39 MiB/s [2024-11-15T11:41:19.463Z] 3211.50 IOPS, 12.54 MiB/s [2024-11-15T11:41:20.397Z] 3221.60 IOPS, 12.58 MiB/s [2024-11-15T11:41:21.330Z] 3220.83 IOPS, 12.58 MiB/s [2024-11-15T11:41:22.704Z] 3226.43 IOPS, 12.60 MiB/s [2024-11-15T11:41:23.638Z] 3218.75 IOPS, 12.57 MiB/s [2024-11-15T11:41:24.571Z] 3215.78 IOPS, 12.56 MiB/s [2024-11-15T11:41:24.571Z] 3217.60 IOPS, 12.57 MiB/s 00:19:44.227 Latency(us) 00:19:44.227 [2024-11-15T11:41:24.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.227 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:44.227 Verification LBA range: start 0x0 length 0x2000 00:19:44.227 TLSTESTn1 : 10.02 3222.37 12.59 0.00 0.00 39647.41 7475.96 47768.46 00:19:44.227 [2024-11-15T11:41:24.571Z] =================================================================================================================== 00:19:44.227 [2024-11-15T11:41:24.571Z] Total : 3222.37 12.59 0.00 0.00 39647.41 7475.96 47768.46 00:19:44.227 { 00:19:44.227 "results": [ 00:19:44.227 { 00:19:44.227 "job": "TLSTESTn1", 00:19:44.227 "core_mask": "0x4", 00:19:44.227 "workload": "verify", 00:19:44.227 "status": "finished", 00:19:44.227 "verify_range": { 00:19:44.227 "start": 0, 00:19:44.227 "length": 8192 00:19:44.227 }, 00:19:44.227 "queue_depth": 128, 00:19:44.227 "io_size": 4096, 00:19:44.227 "runtime": 10.024604, 00:19:44.227 "iops": 3222.371676726582, 00:19:44.227 "mibps": 12.58738936221321, 00:19:44.227 "io_failed": 0, 00:19:44.227 "io_timeout": 0, 00:19:44.227 "avg_latency_us": 39647.4056235575, 00:19:44.227 "min_latency_us": 7475.958518518519, 00:19:44.227 "max_latency_us": 47768.462222222224 00:19:44.227 } 00:19:44.227 ], 00:19:44.227 "core_count": 1 00:19:44.227 } 00:19:44.227 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:44.227 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:44.227 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:44.227 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:44.227 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:44.227 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:44.227 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:44.227 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:44.227 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:44.227 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:44.227 nvmf_trace.0 00:19:44.228 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:44.228 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1050672 00:19:44.228 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1050672 ']' 00:19:44.228 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 1050672 00:19:44.228 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:44.228 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.228 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1050672 00:19:44.228 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:44.228 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:44.228 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1050672' 00:19:44.228 killing process with pid 1050672 00:19:44.228 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1050672 00:19:44.228 Received shutdown signal, test time was about 10.000000 seconds 00:19:44.228 00:19:44.228 Latency(us) 00:19:44.228 [2024-11-15T11:41:24.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.228 [2024-11-15T11:41:24.572Z] =================================================================================================================== 00:19:44.228 [2024-11-15T11:41:24.572Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:44.228 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1050672 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:44.486 rmmod nvme_tcp 00:19:44.486 rmmod nvme_fabrics 00:19:44.486 rmmod nvme_keyring 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1050633 ']' 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1050633 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1050633 ']' 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1050633 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1050633 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:44.486 12:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1050633' 00:19:44.486 killing process with pid 1050633 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1050633 00:19:44.486 12:41:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1050633 00:19:44.745 12:41:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:44.745 12:41:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:44.745 12:41:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:44.745 12:41:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:44.745 12:41:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:44.745 12:41:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:44.745 12:41:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:44.745 12:41:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:44.745 12:41:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:44.746 12:41:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.746 12:41:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.746 12:41:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.mez 00:19:47.283 00:19:47.283 real 0m17.083s 00:19:47.283 user 0m19.214s 00:19:47.283 sys 0m6.853s 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:47.283 ************************************ 00:19:47.283 END TEST nvmf_fips 00:19:47.283 ************************************ 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:47.283 ************************************ 00:19:47.283 START TEST nvmf_control_msg_list 00:19:47.283 ************************************ 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:47.283 * Looking for test storage... 
00:19:47.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:47.283 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:47.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.284 --rc genhtml_branch_coverage=1 00:19:47.284 --rc genhtml_function_coverage=1 00:19:47.284 --rc genhtml_legend=1 00:19:47.284 --rc geninfo_all_blocks=1 00:19:47.284 --rc geninfo_unexecuted_blocks=1 00:19:47.284 00:19:47.284 ' 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:47.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.284 --rc genhtml_branch_coverage=1 00:19:47.284 --rc genhtml_function_coverage=1 00:19:47.284 --rc genhtml_legend=1 00:19:47.284 --rc geninfo_all_blocks=1 00:19:47.284 --rc geninfo_unexecuted_blocks=1 00:19:47.284 00:19:47.284 ' 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:47.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.284 --rc genhtml_branch_coverage=1 00:19:47.284 --rc genhtml_function_coverage=1 00:19:47.284 --rc genhtml_legend=1 00:19:47.284 --rc geninfo_all_blocks=1 00:19:47.284 --rc geninfo_unexecuted_blocks=1 00:19:47.284 00:19:47.284 ' 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:47.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.284 --rc genhtml_branch_coverage=1 00:19:47.284 --rc genhtml_function_coverage=1 00:19:47.284 --rc genhtml_legend=1 00:19:47.284 --rc geninfo_all_blocks=1 00:19:47.284 --rc geninfo_unexecuted_blocks=1 00:19:47.284 00:19:47.284 ' 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:47.284 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:47.285 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:47.285 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:49.190 12:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:49.190 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.190 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.191 12:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:49.191 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:49.191 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:49.191 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.191 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.450 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.450 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.450 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:49.450 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.450 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.450 12:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.450 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:49.450 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:49.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:19:49.451 00:19:49.451 --- 10.0.0.2 ping statistics --- 00:19:49.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.451 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:49.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:19:49.451 00:19:49.451 --- 10.0.0.1 ping statistics --- 00:19:49.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.451 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1054050 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1054050 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1054050 ']' 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.451 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:49.451 [2024-11-15 12:41:29.704771] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:19:49.451 [2024-11-15 12:41:29.704841] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.451 [2024-11-15 12:41:29.773790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.711 [2024-11-15 12:41:29.830147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.711 [2024-11-15 12:41:29.830195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.711 [2024-11-15 12:41:29.830217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.711 [2024-11-15 12:41:29.830228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.711 [2024-11-15 12:41:29.830238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:49.711 [2024-11-15 12:41:29.830843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:49.711 [2024-11-15 12:41:29.977578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.711 12:41:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:49.711 Malloc0 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.711 12:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:49.711 [2024-11-15 12:41:30.018678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1054070 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1054071 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1054072 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1054070 00:19:49.711 12:41:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:49.970 [2024-11-15 12:41:30.097706] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:49.970 [2024-11-15 12:41:30.098031] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:49.970 [2024-11-15 12:41:30.098295] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:50.904 Initializing NVMe Controllers 00:19:50.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:50.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:50.904 Initialization complete. Launching workers. 
00:19:50.904 ======================================================== 00:19:50.904 Latency(us) 00:19:50.904 Device Information : IOPS MiB/s Average min max 00:19:50.904 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40883.30 40516.63 40997.33 00:19:50.904 ======================================================== 00:19:50.904 Total : 25.00 0.10 40883.30 40516.63 40997.33 00:19:50.904 00:19:50.904 Initializing NVMe Controllers 00:19:50.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:50.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:50.904 Initialization complete. Launching workers. 00:19:50.904 ======================================================== 00:19:50.904 Latency(us) 00:19:50.904 Device Information : IOPS MiB/s Average min max 00:19:50.904 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3685.00 14.39 270.99 168.56 41210.00 00:19:50.904 ======================================================== 00:19:50.904 Total : 3685.00 14.39 270.99 168.56 41210.00 00:19:50.904 00:19:50.904 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1054071 00:19:50.904 Initializing NVMe Controllers 00:19:50.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:50.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:50.904 Initialization complete. Launching workers. 00:19:50.904 ======================================================== 00:19:50.904 Latency(us) 00:19:50.904 Device Information : IOPS MiB/s Average min max 00:19:50.904 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3978.00 15.54 250.98 157.83 459.55 00:19:50.904 ======================================================== 00:19:50.904 Total : 3978.00 15.54 250.98 157.83 459.55 00:19:50.904 00:19:50.904 [2024-11-15 12:41:31.240641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fd410 is same with the state(6) to be set 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1054072 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:51.163 rmmod nvme_tcp 00:19:51.163 rmmod nvme_fabrics 00:19:51.163 rmmod nvme_keyring 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:51.163 12:41:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1054050 ']' 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1054050 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1054050 ']' 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1054050 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1054050 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1054050' 00:19:51.163 killing process with pid 1054050 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1054050 00:19:51.163 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1054050 00:19:51.423 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:51.423 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:51.423 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:51.423 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:51.423 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:51.423 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:51.423 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:51.423 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:51.423 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:51.423 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.423 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.423 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.330 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:53.330 00:19:53.330 real 0m6.471s 00:19:53.330 user 0m5.658s 00:19:53.330 sys 0m2.700s 00:19:53.330 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.330 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@10 -- # set +x 00:19:53.330 ************************************ 00:19:53.330 END TEST nvmf_control_msg_list 00:19:53.330 ************************************ 00:19:53.330 12:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:53.330 12:41:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:53.330 12:41:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.330 12:41:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:53.330 ************************************ 00:19:53.330 START TEST nvmf_wait_for_buf 00:19:53.330 ************************************ 00:19:53.330 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:53.589 * Looking for test storage... 00:19:53.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:53.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.589 --rc genhtml_branch_coverage=1 00:19:53.589 --rc genhtml_function_coverage=1 00:19:53.589 --rc genhtml_legend=1 00:19:53.589 --rc geninfo_all_blocks=1 00:19:53.589 --rc geninfo_unexecuted_blocks=1 00:19:53.589 00:19:53.589 ' 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:53.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.589 --rc genhtml_branch_coverage=1 00:19:53.589 --rc genhtml_function_coverage=1 00:19:53.589 --rc genhtml_legend=1 00:19:53.589 --rc geninfo_all_blocks=1 00:19:53.589 --rc geninfo_unexecuted_blocks=1 00:19:53.589 00:19:53.589 ' 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:53.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.589 --rc genhtml_branch_coverage=1 00:19:53.589 --rc genhtml_function_coverage=1 00:19:53.589 --rc genhtml_legend=1 00:19:53.589 --rc geninfo_all_blocks=1 00:19:53.589 --rc geninfo_unexecuted_blocks=1 00:19:53.589 00:19:53.589 ' 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:53.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.589 --rc genhtml_branch_coverage=1 00:19:53.589 --rc genhtml_function_coverage=1 00:19:53.589 --rc genhtml_legend=1 00:19:53.589 --rc geninfo_all_blocks=1 00:19:53.589 --rc geninfo_unexecuted_blocks=1 00:19:53.589 00:19:53.589 ' 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.589 12:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.589 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:53.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:53.590 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:56.123 
12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:56.123 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:56.123 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:56.123 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:56.124 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:56.124 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.124 12:41:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:56.124 12:41:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:56.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:19:56.124 00:19:56.124 --- 10.0.0.2 ping statistics --- 00:19:56.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.124 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:56.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:56.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:19:56.124 00:19:56.124 --- 10.0.0.1 ping statistics --- 00:19:56.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.124 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1056165 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1056165 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1056165 ']' 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.124 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:56.124 [2024-11-15 12:41:36.163678] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:19:56.124 [2024-11-15 12:41:36.163808] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.124 [2024-11-15 12:41:36.238382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.124 [2024-11-15 12:41:36.293475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.125 [2024-11-15 12:41:36.293525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.125 [2024-11-15 12:41:36.293548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.125 [2024-11-15 12:41:36.293559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.125 [2024-11-15 12:41:36.293568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.125 [2024-11-15 12:41:36.294110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:56.125 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.125 12:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:56.384 Malloc0 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:56.384 [2024-11-15 12:41:36.536333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:56.384 [2024-11-15 12:41:36.560531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.384 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:56.384 [2024-11-15 12:41:36.648851] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:58.283 Initializing NVMe Controllers 00:19:58.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:58.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:58.283 Initialization complete. Launching workers. 00:19:58.283 ======================================================== 00:19:58.283 Latency(us) 00:19:58.283 Device Information : IOPS MiB/s Average min max 00:19:58.283 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 42.00 5.25 100127.72 31897.28 191498.60 00:19:58.283 ======================================================== 00:19:58.283 Total : 42.00 5.25 100127.72 31897.28 191498.60 00:19:58.283 00:19:58.283 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:58.283 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.283 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=646 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 646 -eq 0 ]] 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:58.284 rmmod nvme_tcp 00:19:58.284 rmmod nvme_fabrics 00:19:58.284 rmmod nvme_keyring 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1056165 ']' 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1056165 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1056165 ']' 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1056165 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1056165 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1056165' 00:19:58.284 killing process with pid 1056165 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1056165 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1056165 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:58.284 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.816 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:00.816 00:20:00.816 real 0m6.914s 00:20:00.816 user 0m3.285s 00:20:00.816 sys 0m2.095s 00:20:00.816 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.816 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:00.816 ************************************ 00:20:00.816 END TEST nvmf_wait_for_buf 00:20:00.816 ************************************ 00:20:00.816 12:41:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:00.816 12:41:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:00.816 12:41:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:00.816 12:41:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:00.816 12:41:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:00.816 12:41:40 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:02.719 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:02.720 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:02.720 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:02.720 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:02.720 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:02.720 ************************************ 00:20:02.720 START TEST nvmf_perf_adq 00:20:02.720 ************************************ 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:02.720 * Looking for test storage... 00:20:02.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:02.720 12:41:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:02.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.720 --rc genhtml_branch_coverage=1 00:20:02.720 --rc genhtml_function_coverage=1 00:20:02.720 --rc genhtml_legend=1 00:20:02.720 --rc geninfo_all_blocks=1 00:20:02.720 --rc geninfo_unexecuted_blocks=1 00:20:02.720 00:20:02.720 ' 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:02.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.720 --rc genhtml_branch_coverage=1 00:20:02.720 --rc genhtml_function_coverage=1 00:20:02.720 --rc genhtml_legend=1 00:20:02.720 --rc geninfo_all_blocks=1 00:20:02.720 --rc geninfo_unexecuted_blocks=1 00:20:02.720 00:20:02.720 ' 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:02.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.720 --rc genhtml_branch_coverage=1 00:20:02.720 --rc genhtml_function_coverage=1 00:20:02.720 --rc genhtml_legend=1 00:20:02.720 --rc geninfo_all_blocks=1 00:20:02.720 --rc geninfo_unexecuted_blocks=1 00:20:02.720 00:20:02.720 ' 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:02.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.720 --rc genhtml_branch_coverage=1 00:20:02.720 --rc genhtml_function_coverage=1 00:20:02.720 --rc genhtml_legend=1 00:20:02.720 --rc geninfo_all_blocks=1 00:20:02.720 --rc geninfo_unexecuted_blocks=1 00:20:02.720 00:20:02.720 ' 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
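
Note: the trace above walks gather_supported_nvmf_pci_devs — E810 ports are matched by vendor/device ID (0x8086 with 0x159b/0x1592), then each PCI address is resolved to its kernel netdev through /sys/bus/pci/devices/<bdf>/net/. A minimal standalone sketch of that lookup follows; it is not the exact nvmf/common.sh code (the x722/mlx branches and the pci_bus_cache helper are skipped) and only mirrors the steps visible in this log.

#!/usr/bin/env bash
# Sketch: list Intel E810 PCI functions and resolve each to its net interface,
# mirroring the pci_devs / pci_net_devs handling traced above.
declare -a pci_devs net_devs

# Collect matching PCI functions by vendor/device ID from sysfs.
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
    if [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]]; then
        pci_devs+=("${dev##*/}")                 # e.g. 0000:0a:00.0
    fi
done

# Map each PCI function to its netdev name, as in "${pci_net_devs[@]##*/}".
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue      # skip ports with no bound netdev
    net_devs+=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
done

echo "TCP interface candidates: ${net_devs[*]}"
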
00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.720 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:02.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:02.721 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:02.721 12:41:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:05.255 12:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:05.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:05.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:05.255 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:05.256 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:05.256 12:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:05.256 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:05.256 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:05.256 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:05.515 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:08.044 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:13.403 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:13.403 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:13.403 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.403 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:13.403 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:13.403 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:13.403 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.403 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.403 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.403 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:13.403 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:13.403 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:13.403 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:13.404 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:13.404 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:13.404 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:13.404 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:13.404 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:13.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:20:13.404 00:20:13.404 --- 10.0.0.2 ping statistics --- 00:20:13.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.404 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:13.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:20:13.405 00:20:13.405 --- 10.0.0.1 ping statistics --- 00:20:13.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.405 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1061025 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1061025 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1061025 ']' 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:13.405 [2024-11-15 12:41:53.277328] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
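
Note: the nvmf_tcp_init sequence above isolates the first E810 port (cvl_0_0, 10.0.0.2) in the cvl_0_0_ns_spdk namespace as the target side, keeps the second port (cvl_0_1, 10.0.0.1) in the root namespace as the initiator, opens TCP/4420, and ping-checks both directions before nvmf_tgt is launched inside the namespace. A condensed replay of those commands, for reference only — interface names and addresses are this run's values and the whole sequence requires root:

# Condensed from the ip/iptables commands traced above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk                       # target side lives in its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic to the default port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                 # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator

# The target is then started inside the namespace, as in the log:
# ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
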
00:20:13.405 [2024-11-15 12:41:53.277404] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.405 [2024-11-15 12:41:53.356309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.405 [2024-11-15 12:41:53.418205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.405 [2024-11-15 12:41:53.418273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.405 [2024-11-15 12:41:53.418286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.405 [2024-11-15 12:41:53.418297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.405 [2024-11-15 12:41:53.418306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.405 [2024-11-15 12:41:53.419902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.405 [2024-11-15 12:41:53.419930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.405 [2024-11-15 12:41:53.419989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:13.405 [2024-11-15 12:41:53.419993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.405 
12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:13.405 [2024-11-15 12:41:53.683260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:13.405 Malloc1 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.405 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:13.663 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.663 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:13.663 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.663 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:13.663 [2024-11-15 12:41:53.755307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.663 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.663 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1061166 00:20:13.663 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:13.663 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:15.575 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:15.575 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.575 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.575 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.575 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:15.575 "tick_rate": 2700000000, 00:20:15.575 "poll_groups": [ 00:20:15.575 { 00:20:15.575 "name": "nvmf_tgt_poll_group_000", 00:20:15.575 "admin_qpairs": 1, 00:20:15.575 "io_qpairs": 1, 00:20:15.575 "current_admin_qpairs": 1, 00:20:15.575 "current_io_qpairs": 1, 00:20:15.575 "pending_bdev_io": 0, 00:20:15.575 "completed_nvme_io": 19879, 00:20:15.575 "transports": [ 00:20:15.575 { 00:20:15.575 "trtype": "TCP" 00:20:15.575 } 00:20:15.575 ] 00:20:15.575 }, 00:20:15.575 { 00:20:15.575 "name": "nvmf_tgt_poll_group_001", 00:20:15.575 "admin_qpairs": 0, 00:20:15.575 "io_qpairs": 1, 00:20:15.575 "current_admin_qpairs": 0, 00:20:15.575 "current_io_qpairs": 1, 00:20:15.575 "pending_bdev_io": 0, 00:20:15.575 "completed_nvme_io": 20092, 00:20:15.575 "transports": [ 00:20:15.575 { 00:20:15.575 "trtype": "TCP" 00:20:15.575 } 00:20:15.575 ] 00:20:15.575 }, 00:20:15.575 { 00:20:15.575 "name": "nvmf_tgt_poll_group_002", 00:20:15.575 "admin_qpairs": 0, 00:20:15.575 "io_qpairs": 1, 00:20:15.575 "current_admin_qpairs": 0, 00:20:15.575 "current_io_qpairs": 1, 00:20:15.575 "pending_bdev_io": 0, 00:20:15.575 "completed_nvme_io": 19803, 00:20:15.575 "transports": [ 00:20:15.575 { 00:20:15.575 "trtype": "TCP" 00:20:15.575 } 00:20:15.575 ] 00:20:15.575 }, 00:20:15.575 { 00:20:15.575 "name": "nvmf_tgt_poll_group_003", 00:20:15.575 "admin_qpairs": 0, 00:20:15.575 "io_qpairs": 1, 00:20:15.575 "current_admin_qpairs": 0, 00:20:15.575 "current_io_qpairs": 1, 00:20:15.575 "pending_bdev_io": 0, 00:20:15.575 "completed_nvme_io": 19669, 00:20:15.575 "transports": [ 00:20:15.575 { 00:20:15.575 "trtype": "TCP" 00:20:15.575 } 00:20:15.575 ] 00:20:15.575 } 00:20:15.575 ] 00:20:15.575 }' 00:20:15.575 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:15.575 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:15.575 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:15.575 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:15.576 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1061166 00:20:23.681 Initializing NVMe Controllers 00:20:23.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:23.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:23.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:23.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:23.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:23.681 
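
Note: the nvmf_get_stats dump and the jq/wc pipeline above are the test's ADQ placement check — with spdk_nvme_perf driving four queues on cores 0xF0, each of the target's four poll groups should own exactly one I/O qpair. A standalone sketch of the same check; it assumes the target's RPC socket is reachable via scripts/rpc.py at its default path, whereas the traced run goes through rpc_cmd inside the namespace.

# Sketch of the poll-group check performed above: every poll group should carry
# exactly one I/O qpair while perf runs, otherwise ADQ placement is not working.
EXPECTED=4   # one qpair per poll group for a 4-core target (-m 0xF)

count=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)

if [[ $count -ne $EXPECTED ]]; then
    echo "ADQ check failed: $count/$EXPECTED poll groups have exactly one io_qpair" >&2
    exit 1
fi
echo "ADQ check passed: all $EXPECTED poll groups own one io_qpair"
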
Initialization complete. Launching workers. 00:20:23.681 ======================================================== 00:20:23.681 Latency(us) 00:20:23.681 Device Information : IOPS MiB/s Average min max 00:20:23.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10326.40 40.34 6199.18 2273.22 10729.12 00:20:23.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10599.80 41.41 6037.89 2567.14 9928.57 00:20:23.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10367.50 40.50 6174.59 2349.39 10626.90 00:20:23.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10397.20 40.61 6156.53 2636.68 10267.40 00:20:23.681 ======================================================== 00:20:23.681 Total : 41690.90 162.86 6141.42 2273.22 10729.12 00:20:23.681 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:23.681 rmmod nvme_tcp 00:20:23.681 rmmod nvme_fabrics 00:20:23.681 rmmod nvme_keyring 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1061025 ']' 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1061025 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1061025 ']' 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1061025 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1061025 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1061025' 00:20:23.681 killing process with pid 1061025 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1061025 00:20:23.681 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1061025 00:20:23.941 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:23.941 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:23.941 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:23.941 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:23.941 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:23.941 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:23.941 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:23.941 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:23.941 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:23.941 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.941 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.941 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.482 12:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:26.482 12:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:26.482 12:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:26.482 12:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:26.741 12:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:29.270 12:42:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:34.546 12:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:34.546 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.546 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:34.547 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:34.547 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.547 12:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:34.547 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:34.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:20:34.547 00:20:34.547 --- 10.0.0.2 ping statistics --- 00:20:34.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.547 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:20:34.547 00:20:34.547 --- 10.0.0.1 ping statistics --- 00:20:34.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.547 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:34.547 net.core.busy_poll = 1 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:20:34.547 net.core.busy_read = 1 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1064402 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1064402 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1064402 ']' 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.547 [2024-11-15 12:42:14.559440] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:20:34.547 [2024-11-15 12:42:14.559530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.547 [2024-11-15 12:42:14.631242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.547 [2024-11-15 12:42:14.688694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:34.547 [2024-11-15 12:42:14.688763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.547 [2024-11-15 12:42:14.688778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.547 [2024-11-15 12:42:14.688804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.547 [2024-11-15 12:42:14.688814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.547 [2024-11-15 12:42:14.690272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.547 [2024-11-15 12:42:14.690336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.547 [2024-11-15 12:42:14.690406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.547 [2024-11-15 12:42:14.690409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.547 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.804 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.804 12:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:34.804 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.804 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.804 [2024-11-15 12:42:14.958849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.804 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.804 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:34.804 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.804 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.804 Malloc1 00:20:34.804 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.804 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:34.804 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.804 12:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.804 12:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.805 12:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:34.805 12:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.805 12:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.805 12:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.805 12:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:34.805 12:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.805 12:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.805 [2024-11-15 12:42:15.020845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.805 12:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.805 12:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1064549 00:20:34.805 12:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:34.805 12:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:36.704 12:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:36.704 12:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.704 12:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.704 12:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.704 12:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:36.704 "tick_rate": 2700000000, 00:20:36.704 "poll_groups": [ 00:20:36.704 { 00:20:36.704 "name": "nvmf_tgt_poll_group_000", 00:20:36.704 "admin_qpairs": 1, 00:20:36.704 "io_qpairs": 1, 00:20:36.704 "current_admin_qpairs": 1, 00:20:36.704 "current_io_qpairs": 1, 00:20:36.704 "pending_bdev_io": 0, 00:20:36.704 "completed_nvme_io": 25235, 00:20:36.704 "transports": [ 00:20:36.704 { 00:20:36.704 "trtype": "TCP" 00:20:36.704 } 00:20:36.704 ] 00:20:36.704 }, 00:20:36.704 { 00:20:36.704 "name": "nvmf_tgt_poll_group_001", 00:20:36.704 "admin_qpairs": 0, 00:20:36.704 "io_qpairs": 3, 00:20:36.704 "current_admin_qpairs": 0, 00:20:36.704 "current_io_qpairs": 3, 00:20:36.704 "pending_bdev_io": 0, 00:20:36.704 "completed_nvme_io": 25921, 00:20:36.704 "transports": [ 00:20:36.704 { 00:20:36.704 "trtype": "TCP" 00:20:36.704 } 00:20:36.704 ] 00:20:36.704 }, 00:20:36.704 { 00:20:36.704 "name": "nvmf_tgt_poll_group_002", 00:20:36.704 "admin_qpairs": 0, 00:20:36.704 "io_qpairs": 0, 00:20:36.704 "current_admin_qpairs": 0, 00:20:36.704 "current_io_qpairs": 0, 00:20:36.704 "pending_bdev_io": 0, 00:20:36.704 "completed_nvme_io": 0, 00:20:36.704 "transports": [ 00:20:36.704 { 00:20:36.704 "trtype": "TCP" 00:20:36.704 } 00:20:36.704 ] 00:20:36.704 }, 00:20:36.704 { 00:20:36.704 "name": "nvmf_tgt_poll_group_003", 00:20:36.704 "admin_qpairs": 0, 00:20:36.704 "io_qpairs": 0, 00:20:36.704 "current_admin_qpairs": 0, 00:20:36.704 "current_io_qpairs": 0, 00:20:36.704 "pending_bdev_io": 0, 00:20:36.704 "completed_nvme_io": 0, 00:20:36.704 "transports": [ 00:20:36.704 { 00:20:36.704 "trtype": "TCP" 00:20:36.704 } 00:20:36.704 ] 00:20:36.704 } 00:20:36.704 ] 00:20:36.704 }' 00:20:36.704 12:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:36.704 12:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:36.962 12:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:36.962 12:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:36.962 12:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1064549 00:20:45.072 Initializing NVMe Controllers 00:20:45.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:45.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:45.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:45.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:45.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:45.072 Initialization complete. Launching workers. 
00:20:45.072 ======================================================== 00:20:45.072 Latency(us) 00:20:45.072 Device Information : IOPS MiB/s Average min max 00:20:45.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13514.24 52.79 4736.38 1684.72 46450.53 00:20:45.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5197.84 20.30 12316.20 2044.58 61026.39 00:20:45.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4050.05 15.82 15807.29 2108.42 62656.43 00:20:45.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4352.55 17.00 14705.72 1897.27 63437.72 00:20:45.072 ======================================================== 00:20:45.072 Total : 27114.68 105.92 9443.37 1684.72 63437.72 00:20:45.072 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:45.072 rmmod nvme_tcp 00:20:45.072 rmmod nvme_fabrics 00:20:45.072 rmmod nvme_keyring 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1064402 ']' 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1064402 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1064402 ']' 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1064402 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1064402 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1064402' 00:20:45.072 killing process with pid 1064402 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1064402 00:20:45.072 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1064402 00:20:45.331 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:45.331 
12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:45.331 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:45.331 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:45.331 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:45.331 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:45.331 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:45.331 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:45.331 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:45.331 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.331 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.331 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.624 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:48.624 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:48.624 00:20:48.624 real 0m45.866s 00:20:48.624 user 2m41.014s 00:20:48.624 sys 0m8.900s 00:20:48.624 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.624 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:48.624 ************************************ 00:20:48.624 END TEST nvmf_perf_adq 00:20:48.624 ************************************ 00:20:48.624 12:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:48.624 12:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:48.624 12:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.624 12:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.624 ************************************ 00:20:48.624 START TEST nvmf_shutdown 00:20:48.624 ************************************ 00:20:48.624 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:48.624 * Looking for test storage... 
00:20:48.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:48.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.625 --rc genhtml_branch_coverage=1 00:20:48.625 --rc genhtml_function_coverage=1 00:20:48.625 --rc genhtml_legend=1 00:20:48.625 --rc geninfo_all_blocks=1 00:20:48.625 --rc geninfo_unexecuted_blocks=1 00:20:48.625 00:20:48.625 ' 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:48.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.625 --rc genhtml_branch_coverage=1 00:20:48.625 --rc genhtml_function_coverage=1 00:20:48.625 --rc genhtml_legend=1 00:20:48.625 --rc geninfo_all_blocks=1 00:20:48.625 --rc geninfo_unexecuted_blocks=1 00:20:48.625 00:20:48.625 ' 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:48.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.625 --rc genhtml_branch_coverage=1 00:20:48.625 --rc genhtml_function_coverage=1 00:20:48.625 --rc genhtml_legend=1 00:20:48.625 --rc geninfo_all_blocks=1 00:20:48.625 --rc geninfo_unexecuted_blocks=1 00:20:48.625 00:20:48.625 ' 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:48.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.625 --rc genhtml_branch_coverage=1 00:20:48.625 --rc genhtml_function_coverage=1 00:20:48.625 --rc genhtml_legend=1 00:20:48.625 --rc geninfo_all_blocks=1 00:20:48.625 --rc geninfo_unexecuted_blocks=1 00:20:48.625 00:20:48.625 ' 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:48.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:48.625 12:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:48.625 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:48.626 ************************************ 00:20:48.626 START TEST nvmf_shutdown_tc1 00:20:48.626 ************************************ 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:48.626 12:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.157 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.157 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:51.157 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:51.157 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:51.157 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:51.157 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:51.157 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:51.157 12:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:51.157 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:51.157 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:51.157 12:42:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:51.157 12:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:51.157 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:51.157 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:51.157 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:51.157 12:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:51.157 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:51.157 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:51.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:20:51.158 00:20:51.158 --- 10.0.0.2 ping statistics --- 00:20:51.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.158 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:51.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:20:51.158 00:20:51.158 --- 10.0.0.1 ping statistics --- 00:20:51.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.158 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1067850 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1067850 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1067850 ']' 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
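nvmf_tcp_init above builds the point-to-point test topology: the first E810 port (cvl_0_0) is moved into a private namespace that will host the target at 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, the NVMe/TCP listener port is opened in iptables, and both directions are verified with ping. A condensed sketch of the same plumbing, using the interface names and addresses from this run:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1       # start from clean interfaces
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator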
00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.158 [2024-11-15 12:42:31.227298] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:20:51.158 [2024-11-15 12:42:31.227378] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.158 [2024-11-15 12:42:31.300516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:51.158 [2024-11-15 12:42:31.361799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.158 [2024-11-15 12:42:31.361848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.158 [2024-11-15 12:42:31.361878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.158 [2024-11-15 12:42:31.361891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.158 [2024-11-15 12:42:31.361901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.158 [2024-11-15 12:42:31.363487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.158 [2024-11-15 12:42:31.363577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:51.158 [2024-11-15 12:42:31.363686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:51.158 [2024-11-15 12:42:31.363690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.158 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.416 [2024-11-15 12:42:31.501960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.416 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.416 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:51.416 12:42:31 
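The target above is started with 'ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E': -e 0xFFFF enables every tracepoint group (hence the spdk_trace notices), and -m 0x1E pins one reactor to each of cores 1 through 4, which matches the four "Reactor started on core N" lines. A quick way to decode such a core mask:

mask=0x1E
for c in $(seq 0 7); do
    (( (mask >> c) & 1 )) && echo "core $c selected"       # prints cores 1, 2, 3, 4 for 0x1E
done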
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:51.416 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.416 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.416 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:51.416 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.416 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.417 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.417 Malloc1 
00:20:51.417 [2024-11-15 12:42:31.593849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.417 Malloc2 00:20:51.417 Malloc3 00:20:51.417 Malloc4 00:20:51.417 Malloc5 00:20:51.675 Malloc6 00:20:51.675 Malloc7 00:20:51.675 Malloc8 00:20:51.675 Malloc9 00:20:51.675 Malloc10 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1068026 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1068026 /var/tmp/bdevperf.sock 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1068026 ']' 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
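Each 'cat' at shutdown.sh:29 above appends one subsystem's worth of RPCs to rpcs.txt, and the bare rpc_cmd at shutdown.sh:36 replays the whole batch, which is what produces the Malloc1 through Malloc10 bdevs and the TCP listener on 10.0.0.2:4420. The payload itself is not echoed in the trace, so the block below is a reconstruction from standard scripts/rpc.py commands; the malloc size, block size, and serial numbers are illustrative assumptions:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
rm -f rpcs.txt
for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
$rpc_py < rpcs.txt    # replay the batch against the running nvmf_tgt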
00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.934 { 00:20:51.934 "params": { 00:20:51.934 "name": "Nvme$subsystem", 00:20:51.934 "trtype": "$TEST_TRANSPORT", 00:20:51.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.934 "adrfam": "ipv4", 00:20:51.934 "trsvcid": "$NVMF_PORT", 00:20:51.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.934 "hdgst": ${hdgst:-false}, 00:20:51.934 "ddgst": ${ddgst:-false} 00:20:51.934 }, 00:20:51.934 "method": "bdev_nvme_attach_controller" 00:20:51.934 } 00:20:51.934 EOF 00:20:51.934 )") 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.934 { 00:20:51.934 "params": { 00:20:51.934 "name": "Nvme$subsystem", 00:20:51.934 "trtype": "$TEST_TRANSPORT", 00:20:51.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.934 "adrfam": "ipv4", 00:20:51.934 "trsvcid": "$NVMF_PORT", 00:20:51.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.934 "hdgst": ${hdgst:-false}, 00:20:51.934 "ddgst": ${ddgst:-false} 00:20:51.934 }, 00:20:51.934 "method": "bdev_nvme_attach_controller" 00:20:51.934 } 00:20:51.934 EOF 00:20:51.934 )") 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.934 { 00:20:51.934 "params": { 00:20:51.934 "name": "Nvme$subsystem", 00:20:51.934 "trtype": "$TEST_TRANSPORT", 00:20:51.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.934 "adrfam": "ipv4", 00:20:51.934 "trsvcid": "$NVMF_PORT", 00:20:51.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.934 "hdgst": ${hdgst:-false}, 00:20:51.934 "ddgst": ${ddgst:-false} 00:20:51.934 }, 00:20:51.934 "method": "bdev_nvme_attach_controller" 00:20:51.934 } 00:20:51.934 EOF 00:20:51.934 )") 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.934 { 00:20:51.934 "params": { 00:20:51.934 "name": "Nvme$subsystem", 00:20:51.934 
"trtype": "$TEST_TRANSPORT", 00:20:51.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.934 "adrfam": "ipv4", 00:20:51.934 "trsvcid": "$NVMF_PORT", 00:20:51.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.934 "hdgst": ${hdgst:-false}, 00:20:51.934 "ddgst": ${ddgst:-false} 00:20:51.934 }, 00:20:51.934 "method": "bdev_nvme_attach_controller" 00:20:51.934 } 00:20:51.934 EOF 00:20:51.934 )") 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.934 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.934 { 00:20:51.934 "params": { 00:20:51.934 "name": "Nvme$subsystem", 00:20:51.934 "trtype": "$TEST_TRANSPORT", 00:20:51.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.935 "adrfam": "ipv4", 00:20:51.935 "trsvcid": "$NVMF_PORT", 00:20:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.935 "hdgst": ${hdgst:-false}, 00:20:51.935 "ddgst": ${ddgst:-false} 00:20:51.935 }, 00:20:51.935 "method": "bdev_nvme_attach_controller" 00:20:51.935 } 00:20:51.935 EOF 00:20:51.935 )") 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.935 { 00:20:51.935 "params": { 00:20:51.935 "name": "Nvme$subsystem", 00:20:51.935 "trtype": "$TEST_TRANSPORT", 00:20:51.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.935 "adrfam": "ipv4", 00:20:51.935 "trsvcid": "$NVMF_PORT", 00:20:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.935 "hdgst": ${hdgst:-false}, 00:20:51.935 "ddgst": ${ddgst:-false} 00:20:51.935 }, 00:20:51.935 "method": "bdev_nvme_attach_controller" 00:20:51.935 } 00:20:51.935 EOF 00:20:51.935 )") 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.935 { 00:20:51.935 "params": { 00:20:51.935 "name": "Nvme$subsystem", 00:20:51.935 "trtype": "$TEST_TRANSPORT", 00:20:51.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.935 "adrfam": "ipv4", 00:20:51.935 "trsvcid": "$NVMF_PORT", 00:20:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.935 "hdgst": ${hdgst:-false}, 00:20:51.935 "ddgst": ${ddgst:-false} 00:20:51.935 }, 00:20:51.935 "method": "bdev_nvme_attach_controller" 00:20:51.935 } 00:20:51.935 EOF 00:20:51.935 )") 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.935 12:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.935 { 00:20:51.935 "params": { 00:20:51.935 "name": "Nvme$subsystem", 00:20:51.935 "trtype": "$TEST_TRANSPORT", 00:20:51.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.935 "adrfam": "ipv4", 00:20:51.935 "trsvcid": "$NVMF_PORT", 00:20:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.935 "hdgst": ${hdgst:-false}, 00:20:51.935 "ddgst": ${ddgst:-false} 00:20:51.935 }, 00:20:51.935 "method": "bdev_nvme_attach_controller" 00:20:51.935 } 00:20:51.935 EOF 00:20:51.935 )") 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.935 { 00:20:51.935 "params": { 00:20:51.935 "name": "Nvme$subsystem", 00:20:51.935 "trtype": "$TEST_TRANSPORT", 00:20:51.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.935 "adrfam": "ipv4", 00:20:51.935 "trsvcid": "$NVMF_PORT", 00:20:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.935 "hdgst": ${hdgst:-false}, 00:20:51.935 "ddgst": ${ddgst:-false} 00:20:51.935 }, 00:20:51.935 "method": "bdev_nvme_attach_controller" 00:20:51.935 } 00:20:51.935 EOF 00:20:51.935 )") 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.935 { 00:20:51.935 "params": { 00:20:51.935 "name": "Nvme$subsystem", 00:20:51.935 "trtype": "$TEST_TRANSPORT", 00:20:51.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.935 "adrfam": "ipv4", 00:20:51.935 "trsvcid": "$NVMF_PORT", 00:20:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.935 "hdgst": ${hdgst:-false}, 00:20:51.935 "ddgst": ${ddgst:-false} 00:20:51.935 }, 00:20:51.935 "method": "bdev_nvme_attach_controller" 00:20:51.935 } 00:20:51.935 EOF 00:20:51.935 )") 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
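The loop above builds one bdev_nvme_attach_controller fragment per subsystem with a here-doc and collects them in the config array; immediately below, the fragments are joined with IFS=',' and printed as the --json input for the helper app. A condensed sketch of that pattern with the values substituted in this run (tcp, 10.0.0.2:4420); the jq normalization and the outer wrapper emitted by the real gen_nvmf_target_json are omitted here:

config=()
for i in {1..10}; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"    # comma-joined fragments, one attach per target subsystem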
00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:51.935 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:51.935 "params": { 00:20:51.935 "name": "Nvme1", 00:20:51.935 "trtype": "tcp", 00:20:51.935 "traddr": "10.0.0.2", 00:20:51.935 "adrfam": "ipv4", 00:20:51.935 "trsvcid": "4420", 00:20:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.935 "hdgst": false, 00:20:51.935 "ddgst": false 00:20:51.935 }, 00:20:51.935 "method": "bdev_nvme_attach_controller" 00:20:51.935 },{ 00:20:51.935 "params": { 00:20:51.935 "name": "Nvme2", 00:20:51.935 "trtype": "tcp", 00:20:51.935 "traddr": "10.0.0.2", 00:20:51.935 "adrfam": "ipv4", 00:20:51.935 "trsvcid": "4420", 00:20:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:51.935 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:51.935 "hdgst": false, 00:20:51.935 "ddgst": false 00:20:51.935 }, 00:20:51.935 "method": "bdev_nvme_attach_controller" 00:20:51.935 },{ 00:20:51.935 "params": { 00:20:51.935 "name": "Nvme3", 00:20:51.935 "trtype": "tcp", 00:20:51.935 "traddr": "10.0.0.2", 00:20:51.935 "adrfam": "ipv4", 00:20:51.935 "trsvcid": "4420", 00:20:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:51.935 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:51.935 "hdgst": false, 00:20:51.935 "ddgst": false 00:20:51.935 }, 00:20:51.935 "method": "bdev_nvme_attach_controller" 00:20:51.935 },{ 00:20:51.935 "params": { 00:20:51.935 "name": "Nvme4", 00:20:51.935 "trtype": "tcp", 00:20:51.935 "traddr": "10.0.0.2", 00:20:51.935 "adrfam": "ipv4", 00:20:51.935 "trsvcid": "4420", 00:20:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:51.935 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:51.935 "hdgst": false, 00:20:51.935 "ddgst": false 00:20:51.935 }, 00:20:51.935 "method": "bdev_nvme_attach_controller" 00:20:51.935 },{ 00:20:51.935 "params": { 00:20:51.935 "name": "Nvme5", 00:20:51.935 "trtype": "tcp", 00:20:51.935 "traddr": "10.0.0.2", 00:20:51.935 "adrfam": "ipv4", 00:20:51.935 "trsvcid": "4420", 00:20:51.935 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:51.935 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:51.935 "hdgst": false, 00:20:51.935 "ddgst": false 00:20:51.935 }, 00:20:51.935 "method": "bdev_nvme_attach_controller" 00:20:51.935 },{ 00:20:51.935 "params": { 00:20:51.935 "name": "Nvme6", 00:20:51.935 "trtype": "tcp", 00:20:51.935 "traddr": "10.0.0.2", 00:20:51.935 "adrfam": "ipv4", 00:20:51.936 "trsvcid": "4420", 00:20:51.936 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:51.936 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:51.936 "hdgst": false, 00:20:51.936 "ddgst": false 00:20:51.936 }, 00:20:51.936 "method": "bdev_nvme_attach_controller" 00:20:51.936 },{ 00:20:51.936 "params": { 00:20:51.936 "name": "Nvme7", 00:20:51.936 "trtype": "tcp", 00:20:51.936 "traddr": "10.0.0.2", 00:20:51.936 "adrfam": "ipv4", 00:20:51.936 "trsvcid": "4420", 00:20:51.936 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:51.936 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:51.936 "hdgst": false, 00:20:51.936 "ddgst": false 00:20:51.936 }, 00:20:51.936 "method": "bdev_nvme_attach_controller" 00:20:51.936 },{ 00:20:51.936 "params": { 00:20:51.936 "name": "Nvme8", 00:20:51.936 "trtype": "tcp", 00:20:51.936 "traddr": "10.0.0.2", 00:20:51.936 "adrfam": "ipv4", 00:20:51.936 "trsvcid": "4420", 00:20:51.936 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:51.936 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:51.936 "hdgst": false, 00:20:51.936 "ddgst": false 00:20:51.936 }, 00:20:51.936 "method": "bdev_nvme_attach_controller" 00:20:51.936 },{ 00:20:51.936 "params": { 00:20:51.936 "name": "Nvme9", 00:20:51.936 "trtype": "tcp", 00:20:51.936 "traddr": "10.0.0.2", 00:20:51.936 "adrfam": "ipv4", 00:20:51.936 "trsvcid": "4420", 00:20:51.936 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:51.936 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:51.936 "hdgst": false, 00:20:51.936 "ddgst": false 00:20:51.936 }, 00:20:51.936 "method": "bdev_nvme_attach_controller" 00:20:51.936 },{ 00:20:51.936 "params": { 00:20:51.936 "name": "Nvme10", 00:20:51.936 "trtype": "tcp", 00:20:51.936 "traddr": "10.0.0.2", 00:20:51.936 "adrfam": "ipv4", 00:20:51.936 "trsvcid": "4420", 00:20:51.936 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:51.936 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:51.936 "hdgst": false, 00:20:51.936 "ddgst": false 00:20:51.936 }, 00:20:51.936 "method": "bdev_nvme_attach_controller" 00:20:51.936 }' 00:20:51.936 [2024-11-15 12:42:32.105451] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:20:51.936 [2024-11-15 12:42:32.105524] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:51.936 [2024-11-15 12:42:32.177224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.936 [2024-11-15 12:42:32.236373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.834 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.834 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:53.834 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:53.834 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.834 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:53.835 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.835 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1068026 00:20:53.835 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:53.835 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:54.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1068026 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:54.766 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1067850 00:20:54.766 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:54.766 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:20:54.766 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:54.766 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:54.766 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.766 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.766 { 00:20:54.766 "params": { 00:20:54.766 "name": "Nvme$subsystem", 00:20:54.766 "trtype": "$TEST_TRANSPORT", 00:20:54.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.766 "adrfam": "ipv4", 00:20:54.766 "trsvcid": "$NVMF_PORT", 00:20:54.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.766 "hdgst": ${hdgst:-false}, 00:20:54.766 "ddgst": ${ddgst:-false} 00:20:54.766 }, 00:20:54.766 "method": "bdev_nvme_attach_controller" 00:20:54.766 } 00:20:54.766 EOF 00:20:54.766 )") 00:20:54.766 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:54.766 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.766 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.766 { 00:20:54.766 "params": { 00:20:54.766 "name": "Nvme$subsystem", 00:20:54.766 "trtype": "$TEST_TRANSPORT", 00:20:54.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.766 "adrfam": "ipv4", 00:20:54.766 "trsvcid": "$NVMF_PORT", 00:20:54.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.766 "hdgst": ${hdgst:-false}, 00:20:54.766 "ddgst": ${ddgst:-false} 00:20:54.766 }, 00:20:54.766 "method": "bdev_nvme_attach_controller" 00:20:54.766 } 00:20:54.766 EOF 00:20:54.766 )") 00:20:54.766 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:54.766 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.766 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.766 { 00:20:54.766 "params": { 00:20:54.766 "name": "Nvme$subsystem", 00:20:54.766 "trtype": "$TEST_TRANSPORT", 00:20:54.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.766 "adrfam": "ipv4", 00:20:54.766 "trsvcid": "$NVMF_PORT", 00:20:54.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.767 "hdgst": ${hdgst:-false}, 00:20:54.767 "ddgst": ${ddgst:-false} 00:20:54.767 }, 00:20:54.767 "method": "bdev_nvme_attach_controller" 00:20:54.767 } 00:20:54.767 EOF 00:20:54.767 )") 00:20:54.767 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:54.767 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.767 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.767 { 00:20:54.767 "params": { 00:20:54.767 "name": "Nvme$subsystem", 00:20:54.767 "trtype": "$TEST_TRANSPORT", 00:20:54.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.767 "adrfam": "ipv4", 00:20:54.767 
"trsvcid": "$NVMF_PORT", 00:20:54.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.767 "hdgst": ${hdgst:-false}, 00:20:54.767 "ddgst": ${ddgst:-false} 00:20:54.767 }, 00:20:54.767 "method": "bdev_nvme_attach_controller" 00:20:54.767 } 00:20:54.767 EOF 00:20:54.767 )") 00:20:54.767 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:55.025 { 00:20:55.025 "params": { 00:20:55.025 "name": "Nvme$subsystem", 00:20:55.025 "trtype": "$TEST_TRANSPORT", 00:20:55.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.025 "adrfam": "ipv4", 00:20:55.025 "trsvcid": "$NVMF_PORT", 00:20:55.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.025 "hdgst": ${hdgst:-false}, 00:20:55.025 "ddgst": ${ddgst:-false} 00:20:55.025 }, 00:20:55.025 "method": "bdev_nvme_attach_controller" 00:20:55.025 } 00:20:55.025 EOF 00:20:55.025 )") 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:55.025 { 00:20:55.025 "params": { 00:20:55.025 "name": "Nvme$subsystem", 00:20:55.025 "trtype": "$TEST_TRANSPORT", 00:20:55.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.025 "adrfam": "ipv4", 00:20:55.025 "trsvcid": "$NVMF_PORT", 00:20:55.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.025 "hdgst": ${hdgst:-false}, 00:20:55.025 "ddgst": ${ddgst:-false} 00:20:55.025 }, 00:20:55.025 "method": "bdev_nvme_attach_controller" 00:20:55.025 } 00:20:55.025 EOF 00:20:55.025 )") 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:55.025 { 00:20:55.025 "params": { 00:20:55.025 "name": "Nvme$subsystem", 00:20:55.025 "trtype": "$TEST_TRANSPORT", 00:20:55.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.025 "adrfam": "ipv4", 00:20:55.025 "trsvcid": "$NVMF_PORT", 00:20:55.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.025 "hdgst": ${hdgst:-false}, 00:20:55.025 "ddgst": ${ddgst:-false} 00:20:55.025 }, 00:20:55.025 "method": "bdev_nvme_attach_controller" 00:20:55.025 } 00:20:55.025 EOF 00:20:55.025 )") 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:55.025 { 00:20:55.025 
"params": { 00:20:55.025 "name": "Nvme$subsystem", 00:20:55.025 "trtype": "$TEST_TRANSPORT", 00:20:55.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.025 "adrfam": "ipv4", 00:20:55.025 "trsvcid": "$NVMF_PORT", 00:20:55.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.025 "hdgst": ${hdgst:-false}, 00:20:55.025 "ddgst": ${ddgst:-false} 00:20:55.025 }, 00:20:55.025 "method": "bdev_nvme_attach_controller" 00:20:55.025 } 00:20:55.025 EOF 00:20:55.025 )") 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:55.025 { 00:20:55.025 "params": { 00:20:55.025 "name": "Nvme$subsystem", 00:20:55.025 "trtype": "$TEST_TRANSPORT", 00:20:55.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.025 "adrfam": "ipv4", 00:20:55.025 "trsvcid": "$NVMF_PORT", 00:20:55.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.025 "hdgst": ${hdgst:-false}, 00:20:55.025 "ddgst": ${ddgst:-false} 00:20:55.025 }, 00:20:55.025 "method": "bdev_nvme_attach_controller" 00:20:55.025 } 00:20:55.025 EOF 00:20:55.025 )") 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:55.025 { 00:20:55.025 "params": { 00:20:55.025 "name": "Nvme$subsystem", 00:20:55.025 "trtype": "$TEST_TRANSPORT", 00:20:55.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.025 "adrfam": "ipv4", 00:20:55.025 "trsvcid": "$NVMF_PORT", 00:20:55.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.025 "hdgst": ${hdgst:-false}, 00:20:55.025 "ddgst": ${ddgst:-false} 00:20:55.025 }, 00:20:55.025 "method": "bdev_nvme_attach_controller" 00:20:55.025 } 00:20:55.025 EOF 00:20:55.025 )") 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:55.025 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:55.026 12:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:55.026 "params": { 00:20:55.026 "name": "Nvme1", 00:20:55.026 "trtype": "tcp", 00:20:55.026 "traddr": "10.0.0.2", 00:20:55.026 "adrfam": "ipv4", 00:20:55.026 "trsvcid": "4420", 00:20:55.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:55.026 "hdgst": false, 00:20:55.026 "ddgst": false 00:20:55.026 }, 00:20:55.026 "method": "bdev_nvme_attach_controller" 00:20:55.026 },{ 00:20:55.026 "params": { 00:20:55.026 "name": "Nvme2", 00:20:55.026 "trtype": "tcp", 00:20:55.026 "traddr": "10.0.0.2", 00:20:55.026 "adrfam": "ipv4", 00:20:55.026 "trsvcid": "4420", 00:20:55.026 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:55.026 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:55.026 "hdgst": false, 00:20:55.026 "ddgst": false 00:20:55.026 }, 00:20:55.026 "method": "bdev_nvme_attach_controller" 00:20:55.026 },{ 00:20:55.026 "params": { 00:20:55.026 "name": "Nvme3", 00:20:55.026 "trtype": "tcp", 00:20:55.026 "traddr": "10.0.0.2", 00:20:55.026 "adrfam": "ipv4", 00:20:55.026 "trsvcid": "4420", 00:20:55.026 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:55.026 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:55.026 "hdgst": false, 00:20:55.026 "ddgst": false 00:20:55.026 }, 00:20:55.026 "method": "bdev_nvme_attach_controller" 00:20:55.026 },{ 00:20:55.026 "params": { 00:20:55.026 "name": "Nvme4", 00:20:55.026 "trtype": "tcp", 00:20:55.026 "traddr": "10.0.0.2", 00:20:55.026 "adrfam": "ipv4", 00:20:55.026 "trsvcid": "4420", 00:20:55.026 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:55.026 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:55.026 "hdgst": false, 00:20:55.026 "ddgst": false 00:20:55.026 }, 00:20:55.026 "method": "bdev_nvme_attach_controller" 00:20:55.026 },{ 00:20:55.026 "params": { 00:20:55.026 "name": "Nvme5", 00:20:55.026 "trtype": "tcp", 00:20:55.026 "traddr": "10.0.0.2", 00:20:55.026 "adrfam": "ipv4", 00:20:55.026 "trsvcid": "4420", 00:20:55.026 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:55.026 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:55.026 "hdgst": false, 00:20:55.026 "ddgst": false 00:20:55.026 }, 00:20:55.026 "method": "bdev_nvme_attach_controller" 00:20:55.026 },{ 00:20:55.026 "params": { 00:20:55.026 "name": "Nvme6", 00:20:55.026 "trtype": "tcp", 00:20:55.026 "traddr": "10.0.0.2", 00:20:55.026 "adrfam": "ipv4", 00:20:55.026 "trsvcid": "4420", 00:20:55.026 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:55.026 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:55.026 "hdgst": false, 00:20:55.026 "ddgst": false 00:20:55.026 }, 00:20:55.026 "method": "bdev_nvme_attach_controller" 00:20:55.026 },{ 00:20:55.026 "params": { 00:20:55.026 "name": "Nvme7", 00:20:55.026 "trtype": "tcp", 00:20:55.026 "traddr": "10.0.0.2", 00:20:55.026 "adrfam": "ipv4", 00:20:55.026 "trsvcid": "4420", 00:20:55.026 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:55.026 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:55.026 "hdgst": false, 00:20:55.026 "ddgst": false 00:20:55.026 }, 00:20:55.026 "method": "bdev_nvme_attach_controller" 00:20:55.026 },{ 00:20:55.026 "params": { 00:20:55.026 "name": "Nvme8", 00:20:55.026 "trtype": "tcp", 00:20:55.026 "traddr": "10.0.0.2", 00:20:55.026 "adrfam": "ipv4", 00:20:55.026 "trsvcid": "4420", 00:20:55.026 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:55.026 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:55.026 "hdgst": false, 00:20:55.026 "ddgst": false 00:20:55.026 }, 00:20:55.026 "method": "bdev_nvme_attach_controller" 00:20:55.026 },{ 00:20:55.026 "params": { 00:20:55.026 "name": "Nvme9", 00:20:55.026 "trtype": "tcp", 00:20:55.026 "traddr": "10.0.0.2", 00:20:55.026 "adrfam": "ipv4", 00:20:55.026 "trsvcid": "4420", 00:20:55.026 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:55.026 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:55.026 "hdgst": false, 00:20:55.026 "ddgst": false 00:20:55.026 }, 00:20:55.026 "method": "bdev_nvme_attach_controller" 00:20:55.026 },{ 00:20:55.026 "params": { 00:20:55.026 "name": "Nvme10", 00:20:55.026 "trtype": "tcp", 00:20:55.026 "traddr": "10.0.0.2", 00:20:55.026 "adrfam": "ipv4", 00:20:55.026 "trsvcid": "4420", 00:20:55.026 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:55.026 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:55.026 "hdgst": false, 00:20:55.026 "ddgst": false 00:20:55.026 }, 00:20:55.026 "method": "bdev_nvme_attach_controller" 00:20:55.026 }' 00:20:55.026 [2024-11-15 12:42:35.143175] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:20:55.026 [2024-11-15 12:42:35.143255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1068332 ] 00:20:55.026 [2024-11-15 12:42:35.215950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.026 [2024-11-15 12:42:35.276175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.399 Running I/O for 1 seconds... 00:20:57.773 1728.00 IOPS, 108.00 MiB/s 00:20:57.773 Latency(us) 00:20:57.773 [2024-11-15T11:42:38.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.773 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.773 Verification LBA range: start 0x0 length 0x400 00:20:57.773 Nvme1n1 : 1.19 215.52 13.47 0.00 0.00 294278.83 18641.35 267192.70 00:20:57.773 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.773 Verification LBA range: start 0x0 length 0x400 00:20:57.773 Nvme2n1 : 1.17 217.96 13.62 0.00 0.00 286281.77 20680.25 256318.58 00:20:57.773 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.773 Verification LBA range: start 0x0 length 0x400 00:20:57.773 Nvme3n1 : 1.17 219.70 13.73 0.00 0.00 278055.44 20291.89 270299.59 00:20:57.773 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.773 Verification LBA range: start 0x0 length 0x400 00:20:57.773 Nvme4n1 : 1.17 219.53 13.72 0.00 0.00 274524.16 18252.99 262532.36 00:20:57.773 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.773 Verification LBA range: start 0x0 length 0x400 00:20:57.773 Nvme5n1 : 1.19 214.32 13.40 0.00 0.00 277571.89 23884.23 267192.70 00:20:57.773 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.773 Verification LBA range: start 0x0 length 0x400 00:20:57.773 Nvme6n1 : 1.20 213.45 13.34 0.00 0.00 274248.82 20971.52 282727.16 00:20:57.773 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.773 Verification LBA range: start 0x0 length 0x400 00:20:57.773 Nvme7n1 : 1.21 264.51 16.53 0.00 0.00 217608.95 35146.71 256318.58 00:20:57.773 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.773 
Verification LBA range: start 0x0 length 0x400 00:20:57.773 Nvme8n1 : 1.21 263.99 16.50 0.00 0.00 214450.63 17573.36 274959.93 00:20:57.773 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.773 Verification LBA range: start 0x0 length 0x400 00:20:57.773 Nvme9n1 : 1.20 212.60 13.29 0.00 0.00 261723.02 22622.06 284280.60 00:20:57.773 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.773 Verification LBA range: start 0x0 length 0x400 00:20:57.773 Nvme10n1 : 1.21 212.01 13.25 0.00 0.00 258275.18 22233.69 276513.37 00:20:57.773 [2024-11-15T11:42:38.117Z] =================================================================================================================== 00:20:57.773 [2024-11-15T11:42:38.117Z] Total : 2253.59 140.85 0.00 0.00 261431.77 17573.36 284280.60 00:20:58.030 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:58.030 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:58.030 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:58.031 rmmod nvme_tcp 00:20:58.031 rmmod nvme_fabrics 00:20:58.031 rmmod nvme_keyring 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1067850 ']' 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1067850 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1067850 ']' 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1067850 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
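The bdevperf run above uses -q 64 and -o 65536, so throughput is simply IOPS divided by 16 (65536 B per I/O, 16 I/Os per MiB): the 2253.59 total IOPS correspond to the reported 140.85 MiB/s, and the same check holds per controller (for example Nvme1n1: 215.52 / 16 = 13.47 MiB/s). A one-liner to verify:

awk 'BEGIN { printf "%.2f MiB/s\n", 2253.59 * 65536 / (1024 * 1024) }'   # -> 140.85 MiB/s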
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1067850 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1067850' 00:20:58.031 killing process with pid 1067850 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1067850 00:20:58.031 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1067850 00:20:58.598 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:58.598 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:58.598 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:58.598 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:58.598 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:58.598 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:58.598 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:58.598 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:58.598 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:58.598 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.598 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.598 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.504 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:00.504 00:21:00.504 real 0m11.969s 00:21:00.504 user 0m34.351s 00:21:00.504 sys 0m3.308s 00:21:00.504 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.504 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.504 ************************************ 00:21:00.504 END TEST nvmf_shutdown_tc1 00:21:00.504 ************************************ 00:21:00.504 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:00.504 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:00.504 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:21:00.504 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:00.765 ************************************ 00:21:00.765 START TEST nvmf_shutdown_tc2 00:21:00.765 ************************************ 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:00.765 12:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:00.765 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:00.765 12:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:00.765 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:00.765 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:00.765 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:00.766 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:00.766 12:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:00.766 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:00.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:21:00.766 00:21:00.766 --- 10.0.0.2 ping statistics --- 00:21:00.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.766 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:21:00.766 00:21:00.766 --- 10.0.0.1 ping statistics --- 00:21:00.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.766 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1069212 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1069212 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1069212 ']' 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
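At this point nvmftestinit has wired the two e810 ports into a back-to-back test topology: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2/24, cvl_0_1 stays in the host namespace with the initiator address 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are verified with the pings shown above before the target is launched inside the namespace. A minimal sketch of that wiring, assuming the same interface and namespace names as this run:

  # network setup behind the trace above (sketch, not the nvmf/common.sh functions themselves)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # host/initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF                         # the run uses the full rule text as the comment
  ping -c 1 10.0.0.2                                         # host -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target namespace -> host
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

The -m 0x1E core mask is why the reactors in the startup notices below come up on cores 1-4.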
00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.766 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.766 [2024-11-15 12:42:41.104255] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:21:00.766 [2024-11-15 12:42:41.104353] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.025 [2024-11-15 12:42:41.174846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:01.025 [2024-11-15 12:42:41.229037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.025 [2024-11-15 12:42:41.229092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.025 [2024-11-15 12:42:41.229119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.025 [2024-11-15 12:42:41.229129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.025 [2024-11-15 12:42:41.229138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.025 [2024-11-15 12:42:41.230575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.025 [2024-11-15 12:42:41.230633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:01.025 [2024-11-15 12:42:41.230701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:01.025 [2024-11-15 12:42:41.230703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.025 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.025 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:01.025 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:01.025 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.025 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:01.283 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:01.284 [2024-11-15 12:42:41.378367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:01.284 12:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.284 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:01.284 Malloc1 
00:21:01.284 [2024-11-15 12:42:41.478853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.284 Malloc2 00:21:01.284 Malloc3 00:21:01.284 Malloc4 00:21:01.542 Malloc5 00:21:01.542 Malloc6 00:21:01.542 Malloc7 00:21:01.542 Malloc8 00:21:01.542 Malloc9 00:21:01.801 Malloc10 00:21:01.801 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.801 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:01.801 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.801 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:01.801 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1069280 00:21:01.801 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1069280 /var/tmp/bdevperf.sock 00:21:01.801 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1069280 ']' 00:21:01.801 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.801 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:01.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
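The bdevperf side of this test is driven entirely by a generated JSON config: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem (Nvme1..Nvme10, all pointing at 10.0.0.2:4420 with nqn.2016-06.io.spdk:cnodeN / hostN and digests disabled), jq merges the stanzas, and bdevperf reads the result through /dev/fd/63, i.e. the config is presumably handed over via process substitution, which is why the trace shows --json /dev/fd/63. Reduced to its essentials, the launch recorded at target/shutdown.sh@103-105 looks like:

  # tc2 bdevperf launch, condensed from the trace (a sketch, not the script itself)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
      -q 64 -o 65536 -w verify -t 10 &
  perfpid=$!                                       # 1069280 in this run
  waitforlisten "$perfpid" /var/tmp/bdevperf.sock

Once the RPC socket is up, the waitforio helper further down polls "bdev_get_iostat -b Nvme1n1" on /var/tmp/bdevperf.sock until num_read_ops reaches 100 (3, then 67, then 131 in this run) before bdevperf is killed and the target is torn down.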
00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.802 { 00:21:01.802 "params": { 00:21:01.802 "name": "Nvme$subsystem", 00:21:01.802 "trtype": "$TEST_TRANSPORT", 00:21:01.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.802 "adrfam": "ipv4", 00:21:01.802 "trsvcid": "$NVMF_PORT", 00:21:01.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.802 "hdgst": ${hdgst:-false}, 00:21:01.802 "ddgst": ${ddgst:-false} 00:21:01.802 }, 00:21:01.802 "method": "bdev_nvme_attach_controller" 00:21:01.802 } 00:21:01.802 EOF 00:21:01.802 )") 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.802 { 00:21:01.802 "params": { 00:21:01.802 "name": "Nvme$subsystem", 00:21:01.802 "trtype": "$TEST_TRANSPORT", 00:21:01.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.802 "adrfam": "ipv4", 00:21:01.802 "trsvcid": "$NVMF_PORT", 00:21:01.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.802 "hdgst": ${hdgst:-false}, 00:21:01.802 "ddgst": ${ddgst:-false} 00:21:01.802 }, 00:21:01.802 "method": "bdev_nvme_attach_controller" 00:21:01.802 } 00:21:01.802 EOF 00:21:01.802 )") 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.802 { 00:21:01.802 "params": { 00:21:01.802 "name": "Nvme$subsystem", 00:21:01.802 "trtype": "$TEST_TRANSPORT", 00:21:01.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.802 "adrfam": "ipv4", 00:21:01.802 "trsvcid": "$NVMF_PORT", 00:21:01.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.802 "hdgst": ${hdgst:-false}, 00:21:01.802 "ddgst": ${ddgst:-false} 00:21:01.802 }, 00:21:01.802 "method": "bdev_nvme_attach_controller" 00:21:01.802 } 00:21:01.802 EOF 00:21:01.802 )") 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.802 { 00:21:01.802 "params": { 00:21:01.802 "name": "Nvme$subsystem", 00:21:01.802 
"trtype": "$TEST_TRANSPORT", 00:21:01.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.802 "adrfam": "ipv4", 00:21:01.802 "trsvcid": "$NVMF_PORT", 00:21:01.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.802 "hdgst": ${hdgst:-false}, 00:21:01.802 "ddgst": ${ddgst:-false} 00:21:01.802 }, 00:21:01.802 "method": "bdev_nvme_attach_controller" 00:21:01.802 } 00:21:01.802 EOF 00:21:01.802 )") 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.802 { 00:21:01.802 "params": { 00:21:01.802 "name": "Nvme$subsystem", 00:21:01.802 "trtype": "$TEST_TRANSPORT", 00:21:01.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.802 "adrfam": "ipv4", 00:21:01.802 "trsvcid": "$NVMF_PORT", 00:21:01.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.802 "hdgst": ${hdgst:-false}, 00:21:01.802 "ddgst": ${ddgst:-false} 00:21:01.802 }, 00:21:01.802 "method": "bdev_nvme_attach_controller" 00:21:01.802 } 00:21:01.802 EOF 00:21:01.802 )") 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.802 { 00:21:01.802 "params": { 00:21:01.802 "name": "Nvme$subsystem", 00:21:01.802 "trtype": "$TEST_TRANSPORT", 00:21:01.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.802 "adrfam": "ipv4", 00:21:01.802 "trsvcid": "$NVMF_PORT", 00:21:01.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.802 "hdgst": ${hdgst:-false}, 00:21:01.802 "ddgst": ${ddgst:-false} 00:21:01.802 }, 00:21:01.802 "method": "bdev_nvme_attach_controller" 00:21:01.802 } 00:21:01.802 EOF 00:21:01.802 )") 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.802 { 00:21:01.802 "params": { 00:21:01.802 "name": "Nvme$subsystem", 00:21:01.802 "trtype": "$TEST_TRANSPORT", 00:21:01.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.802 "adrfam": "ipv4", 00:21:01.802 "trsvcid": "$NVMF_PORT", 00:21:01.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.802 "hdgst": ${hdgst:-false}, 00:21:01.802 "ddgst": ${ddgst:-false} 00:21:01.802 }, 00:21:01.802 "method": "bdev_nvme_attach_controller" 00:21:01.802 } 00:21:01.802 EOF 00:21:01.802 )") 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.802 12:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.802 { 00:21:01.802 "params": { 00:21:01.802 "name": "Nvme$subsystem", 00:21:01.802 "trtype": "$TEST_TRANSPORT", 00:21:01.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.802 "adrfam": "ipv4", 00:21:01.802 "trsvcid": "$NVMF_PORT", 00:21:01.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.802 "hdgst": ${hdgst:-false}, 00:21:01.802 "ddgst": ${ddgst:-false} 00:21:01.802 }, 00:21:01.802 "method": "bdev_nvme_attach_controller" 00:21:01.802 } 00:21:01.802 EOF 00:21:01.802 )") 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.802 { 00:21:01.802 "params": { 00:21:01.802 "name": "Nvme$subsystem", 00:21:01.802 "trtype": "$TEST_TRANSPORT", 00:21:01.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.802 "adrfam": "ipv4", 00:21:01.802 "trsvcid": "$NVMF_PORT", 00:21:01.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.802 "hdgst": ${hdgst:-false}, 00:21:01.802 "ddgst": ${ddgst:-false} 00:21:01.802 }, 00:21:01.802 "method": "bdev_nvme_attach_controller" 00:21:01.802 } 00:21:01.802 EOF 00:21:01.802 )") 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.802 { 00:21:01.802 "params": { 00:21:01.802 "name": "Nvme$subsystem", 00:21:01.802 "trtype": "$TEST_TRANSPORT", 00:21:01.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.802 "adrfam": "ipv4", 00:21:01.802 "trsvcid": "$NVMF_PORT", 00:21:01.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.802 "hdgst": ${hdgst:-false}, 00:21:01.802 "ddgst": ${ddgst:-false} 00:21:01.802 }, 00:21:01.802 "method": "bdev_nvme_attach_controller" 00:21:01.802 } 00:21:01.802 EOF 00:21:01.802 )") 00:21:01.802 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:01.803 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:21:01.803 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:01.803 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:01.803 "params": { 00:21:01.803 "name": "Nvme1", 00:21:01.803 "trtype": "tcp", 00:21:01.803 "traddr": "10.0.0.2", 00:21:01.803 "adrfam": "ipv4", 00:21:01.803 "trsvcid": "4420", 00:21:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.803 "hdgst": false, 00:21:01.803 "ddgst": false 00:21:01.803 }, 00:21:01.803 "method": "bdev_nvme_attach_controller" 00:21:01.803 },{ 00:21:01.803 "params": { 00:21:01.803 "name": "Nvme2", 00:21:01.803 "trtype": "tcp", 00:21:01.803 "traddr": "10.0.0.2", 00:21:01.803 "adrfam": "ipv4", 00:21:01.803 "trsvcid": "4420", 00:21:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:01.803 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:01.803 "hdgst": false, 00:21:01.803 "ddgst": false 00:21:01.803 }, 00:21:01.803 "method": "bdev_nvme_attach_controller" 00:21:01.803 },{ 00:21:01.803 "params": { 00:21:01.803 "name": "Nvme3", 00:21:01.803 "trtype": "tcp", 00:21:01.803 "traddr": "10.0.0.2", 00:21:01.803 "adrfam": "ipv4", 00:21:01.803 "trsvcid": "4420", 00:21:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:01.803 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:01.803 "hdgst": false, 00:21:01.803 "ddgst": false 00:21:01.803 }, 00:21:01.803 "method": "bdev_nvme_attach_controller" 00:21:01.803 },{ 00:21:01.803 "params": { 00:21:01.803 "name": "Nvme4", 00:21:01.803 "trtype": "tcp", 00:21:01.803 "traddr": "10.0.0.2", 00:21:01.803 "adrfam": "ipv4", 00:21:01.803 "trsvcid": "4420", 00:21:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:01.803 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:01.803 "hdgst": false, 00:21:01.803 "ddgst": false 00:21:01.803 }, 00:21:01.803 "method": "bdev_nvme_attach_controller" 00:21:01.803 },{ 00:21:01.803 "params": { 00:21:01.803 "name": "Nvme5", 00:21:01.803 "trtype": "tcp", 00:21:01.803 "traddr": "10.0.0.2", 00:21:01.803 "adrfam": "ipv4", 00:21:01.803 "trsvcid": "4420", 00:21:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:01.803 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:01.803 "hdgst": false, 00:21:01.803 "ddgst": false 00:21:01.803 }, 00:21:01.803 "method": "bdev_nvme_attach_controller" 00:21:01.803 },{ 00:21:01.803 "params": { 00:21:01.803 "name": "Nvme6", 00:21:01.803 "trtype": "tcp", 00:21:01.803 "traddr": "10.0.0.2", 00:21:01.803 "adrfam": "ipv4", 00:21:01.803 "trsvcid": "4420", 00:21:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:01.803 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:01.803 "hdgst": false, 00:21:01.803 "ddgst": false 00:21:01.803 }, 00:21:01.803 "method": "bdev_nvme_attach_controller" 00:21:01.803 },{ 00:21:01.803 "params": { 00:21:01.803 "name": "Nvme7", 00:21:01.803 "trtype": "tcp", 00:21:01.803 "traddr": "10.0.0.2", 00:21:01.803 "adrfam": "ipv4", 00:21:01.803 "trsvcid": "4420", 00:21:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:01.803 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:01.803 "hdgst": false, 00:21:01.803 "ddgst": false 00:21:01.803 }, 00:21:01.803 "method": "bdev_nvme_attach_controller" 00:21:01.803 },{ 00:21:01.803 "params": { 00:21:01.803 "name": "Nvme8", 00:21:01.803 "trtype": "tcp", 00:21:01.803 "traddr": "10.0.0.2", 00:21:01.803 "adrfam": "ipv4", 00:21:01.803 "trsvcid": "4420", 00:21:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:01.803 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:01.803 "hdgst": false, 00:21:01.803 "ddgst": false 00:21:01.803 }, 00:21:01.803 "method": "bdev_nvme_attach_controller" 00:21:01.803 },{ 00:21:01.803 "params": { 00:21:01.803 "name": "Nvme9", 00:21:01.803 "trtype": "tcp", 00:21:01.803 "traddr": "10.0.0.2", 00:21:01.803 "adrfam": "ipv4", 00:21:01.803 "trsvcid": "4420", 00:21:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:01.803 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:01.803 "hdgst": false, 00:21:01.803 "ddgst": false 00:21:01.803 }, 00:21:01.803 "method": "bdev_nvme_attach_controller" 00:21:01.803 },{ 00:21:01.803 "params": { 00:21:01.803 "name": "Nvme10", 00:21:01.803 "trtype": "tcp", 00:21:01.803 "traddr": "10.0.0.2", 00:21:01.803 "adrfam": "ipv4", 00:21:01.803 "trsvcid": "4420", 00:21:01.803 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:01.803 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:01.803 "hdgst": false, 00:21:01.803 "ddgst": false 00:21:01.803 }, 00:21:01.803 "method": "bdev_nvme_attach_controller" 00:21:01.803 }' 00:21:01.803 [2024-11-15 12:42:41.979914] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:21:01.803 [2024-11-15 12:42:41.979995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1069280 ] 00:21:01.803 [2024-11-15 12:42:42.052205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.803 [2024-11-15 12:42:42.112076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.700 Running I/O for 10 seconds... 00:21:03.700 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.700 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:03.701 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:03.701 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.701 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:03.701 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.701 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:03.701 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:03.701 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:03.701 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:03.701 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:03.701 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:03.701 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:03.701 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:03.701 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:03.701 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.701 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:03.701 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.959 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:03.959 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:03.959 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:04.217 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:04.217 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:04.217 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:04.217 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:04.217 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.217 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:04.217 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.217 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:04.217 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:04.217 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:04.475 12:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1069280 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1069280 ']' 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1069280 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1069280 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1069280' 00:21:04.475 killing process with pid 1069280 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1069280 00:21:04.475 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1069280 00:21:04.475 2125.00 IOPS, 132.81 MiB/s [2024-11-15T11:42:44.819Z] Received shutdown signal, test time was about 1.028504 seconds 00:21:04.475 00:21:04.475 Latency(us) 00:21:04.475 [2024-11-15T11:42:44.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.475 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.475 Verification LBA range: start 0x0 length 0x400 00:21:04.475 Nvme1n1 : 1.02 250.15 15.63 0.00 0.00 252430.79 24855.13 253211.69 00:21:04.475 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.475 Verification LBA range: start 0x0 length 0x400 00:21:04.475 Nvme2n1 : 1.00 256.93 16.06 0.00 0.00 241348.46 30874.74 254765.13 00:21:04.475 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.475 Verification LBA range: start 0x0 length 0x400 00:21:04.475 Nvme3n1 : 0.99 258.82 16.18 0.00 0.00 235204.46 20097.71 254765.13 00:21:04.475 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.475 Verification LBA range: start 0x0 length 0x400 00:21:04.475 Nvme4n1 : 1.00 259.34 16.21 0.00 0.00 229763.92 2572.89 250104.79 00:21:04.475 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.475 Verification LBA range: start 0x0 length 0x400 00:21:04.475 Nvme5n1 : 1.03 249.11 15.57 0.00 0.00 234978.61 21845.33 273406.48 00:21:04.475 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.475 Verification LBA range: start 0x0 length 0x400 00:21:04.475 Nvme6n1 : 0.96 199.27 12.45 0.00 0.00 287159.69 19903.53 260978.92 
00:21:04.475 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.475 Verification LBA range: start 0x0 length 0x400 00:21:04.475 Nvme7n1 : 0.97 198.74 12.42 0.00 0.00 282203.78 31845.64 243891.01 00:21:04.475 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.475 Verification LBA range: start 0x0 length 0x400 00:21:04.475 Nvme8n1 : 1.02 256.01 16.00 0.00 0.00 215951.81 5776.88 250104.79 00:21:04.475 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.475 Verification LBA range: start 0x0 length 0x400 00:21:04.475 Nvme9n1 : 0.98 195.07 12.19 0.00 0.00 276355.60 21748.24 270299.59 00:21:04.475 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.475 Verification LBA range: start 0x0 length 0x400 00:21:04.475 Nvme10n1 : 0.98 196.24 12.27 0.00 0.00 268643.49 21554.06 288940.94 00:21:04.475 [2024-11-15T11:42:44.819Z] =================================================================================================================== 00:21:04.475 [2024-11-15T11:42:44.819Z] Total : 2319.67 144.98 0.00 0.00 249396.30 2572.89 288940.94 00:21:04.733 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1069212 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:06.106 rmmod nvme_tcp 00:21:06.106 rmmod nvme_fabrics 00:21:06.106 rmmod nvme_keyring 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1069212 ']' 00:21:06.106 12:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1069212 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1069212 ']' 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1069212 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1069212 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1069212' 00:21:06.106 killing process with pid 1069212 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1069212 00:21:06.106 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1069212 00:21:06.364 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:06.364 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:06.364 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:06.364 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:06.364 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:06.364 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:06.364 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:06.364 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:06.364 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:06.364 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.364 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.364 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:08.900 00:21:08.900 real 0m7.843s 00:21:08.900 user 0m24.328s 00:21:08.900 sys 0m1.428s 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.900 12:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:08.900 ************************************ 00:21:08.900 END TEST nvmf_shutdown_tc2 00:21:08.900 ************************************ 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:08.900 ************************************ 00:21:08.900 START TEST nvmf_shutdown_tc3 00:21:08.900 ************************************ 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:08.900 12:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:08.900 12:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:08.900 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:08.900 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:08.900 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
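The discovery loop traced above reduces to globbing each PCI function's net/ directory and keeping the basename. A condensed sketch of that pattern, restricted to the two E810 functions reported in this run (0000:0a:00.0 and 0000:0a:00.1); the real common.sh loop walks whatever pci_devs it detected:

for pci in 0000:0a:00.0 0000:0a:00.1; do
    # each match is /sys/bus/pci/devices/<pci>/net/<ifname>
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # strip the directory prefix, leaving only the interface names
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done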
00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:08.901 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:08.901 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:08.901 12:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:08.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:21:08.901 00:21:08.901 --- 10.0.0.2 ping statistics --- 00:21:08.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.901 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:08.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:21:08.901 00:21:08.901 --- 10.0.0.1 ping statistics --- 00:21:08.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.901 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:08.901 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:08.901 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:08.901 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:08.901 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:08.901 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:08.901 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1070314 00:21:08.901 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:08.901 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1070314 00:21:08.901 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1070314 ']' 00:21:08.901 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.901 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.901 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
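The ping exchange above is the tail end of a small namespace build-out; the commands it verifies, as they appear earlier in this trace (interface names and addresses from this run, address-flush and cleanup steps omitted):

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP (comment tag omitted)
ping -c 1 10.0.0.2                                  # root namespace to target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target back to root namespace

Keeping the target in its own namespace lets a single host drive real NIC-to-NIC TCP traffic between the two E810 ports; the full iptables command tags the rule with an SPDK_NVMF comment so the iptables-save | grep -v SPDK_NVMF | iptables-restore step seen during teardown can strip it again.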
00:21:08.901 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.901 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:08.901 [2024-11-15 12:42:49.077199] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:21:08.901 [2024-11-15 12:42:49.077279] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.901 [2024-11-15 12:42:49.148157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.901 [2024-11-15 12:42:49.207205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.901 [2024-11-15 12:42:49.207260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.901 [2024-11-15 12:42:49.207289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.901 [2024-11-15 12:42:49.207302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.901 [2024-11-15 12:42:49.207312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.901 [2024-11-15 12:42:49.208912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.901 [2024-11-15 12:42:49.208971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.901 [2024-11-15 12:42:49.209038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:08.901 [2024-11-15 12:42:49.209041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.159 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.159 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:09.159 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:09.159 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.159 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:09.159 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.159 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:09.159 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.159 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:09.160 [2024-11-15 12:42:49.360416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:09.160 12:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.160 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:09.160 Malloc1 
00:21:09.160 [2024-11-15 12:42:49.457928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.160 Malloc2 00:21:09.418 Malloc3 00:21:09.418 Malloc4 00:21:09.418 Malloc5 00:21:09.418 Malloc6 00:21:09.418 Malloc7 00:21:09.676 Malloc8 00:21:09.676 Malloc9 00:21:09.676 Malloc10 00:21:09.676 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.676 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:09.676 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.676 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:09.676 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1070376 00:21:09.676 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1070376 /var/tmp/bdevperf.sock 00:21:09.676 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1070376 ']' 00:21:09.676 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.676 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:09.676 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.676 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:09.676 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
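The --json /dev/fd/63 argument in the bdevperf command line above is almost certainly the result of process substitution: the generated target JSON is handed to bdevperf without touching disk. A minimal sketch of that launch with the paths and options from this run (gen_nvmf_target_json is the suite helper whose expansion is traced next):

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# the shell rewrites <(...) into a /dev/fd/NN path, hence the /dev/fd/63 seen above
"$BDEVPERF" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!    # recorded so the shutdown test can signal and wait on the perf job later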
00:21:09.676 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.676 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.677 { 00:21:09.677 "params": { 00:21:09.677 "name": "Nvme$subsystem", 00:21:09.677 "trtype": "$TEST_TRANSPORT", 00:21:09.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.677 "adrfam": "ipv4", 00:21:09.677 "trsvcid": "$NVMF_PORT", 00:21:09.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.677 "hdgst": ${hdgst:-false}, 00:21:09.677 "ddgst": ${ddgst:-false} 00:21:09.677 }, 00:21:09.677 "method": "bdev_nvme_attach_controller" 00:21:09.677 } 00:21:09.677 EOF 00:21:09.677 )") 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.677 { 00:21:09.677 "params": { 00:21:09.677 "name": "Nvme$subsystem", 00:21:09.677 "trtype": "$TEST_TRANSPORT", 00:21:09.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.677 "adrfam": "ipv4", 00:21:09.677 "trsvcid": "$NVMF_PORT", 00:21:09.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.677 "hdgst": ${hdgst:-false}, 00:21:09.677 "ddgst": ${ddgst:-false} 00:21:09.677 }, 00:21:09.677 "method": "bdev_nvme_attach_controller" 00:21:09.677 } 00:21:09.677 EOF 00:21:09.677 )") 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.677 { 00:21:09.677 "params": { 00:21:09.677 "name": "Nvme$subsystem", 00:21:09.677 "trtype": "$TEST_TRANSPORT", 00:21:09.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.677 "adrfam": "ipv4", 00:21:09.677 "trsvcid": "$NVMF_PORT", 00:21:09.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.677 "hdgst": ${hdgst:-false}, 00:21:09.677 "ddgst": ${ddgst:-false} 00:21:09.677 }, 00:21:09.677 "method": "bdev_nvme_attach_controller" 00:21:09.677 } 00:21:09.677 EOF 00:21:09.677 )") 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:21:09.677 { 00:21:09.677 "params": { 00:21:09.677 "name": "Nvme$subsystem", 00:21:09.677 "trtype": "$TEST_TRANSPORT", 00:21:09.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.677 "adrfam": "ipv4", 00:21:09.677 "trsvcid": "$NVMF_PORT", 00:21:09.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.677 "hdgst": ${hdgst:-false}, 00:21:09.677 "ddgst": ${ddgst:-false} 00:21:09.677 }, 00:21:09.677 "method": "bdev_nvme_attach_controller" 00:21:09.677 } 00:21:09.677 EOF 00:21:09.677 )") 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.677 { 00:21:09.677 "params": { 00:21:09.677 "name": "Nvme$subsystem", 00:21:09.677 "trtype": "$TEST_TRANSPORT", 00:21:09.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.677 "adrfam": "ipv4", 00:21:09.677 "trsvcid": "$NVMF_PORT", 00:21:09.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.677 "hdgst": ${hdgst:-false}, 00:21:09.677 "ddgst": ${ddgst:-false} 00:21:09.677 }, 00:21:09.677 "method": "bdev_nvme_attach_controller" 00:21:09.677 } 00:21:09.677 EOF 00:21:09.677 )") 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.677 { 00:21:09.677 "params": { 00:21:09.677 "name": "Nvme$subsystem", 00:21:09.677 "trtype": "$TEST_TRANSPORT", 00:21:09.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.677 "adrfam": "ipv4", 00:21:09.677 "trsvcid": "$NVMF_PORT", 00:21:09.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.677 "hdgst": ${hdgst:-false}, 00:21:09.677 "ddgst": ${ddgst:-false} 00:21:09.677 }, 00:21:09.677 "method": "bdev_nvme_attach_controller" 00:21:09.677 } 00:21:09.677 EOF 00:21:09.677 )") 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.677 { 00:21:09.677 "params": { 00:21:09.677 "name": "Nvme$subsystem", 00:21:09.677 "trtype": "$TEST_TRANSPORT", 00:21:09.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.677 "adrfam": "ipv4", 00:21:09.677 "trsvcid": "$NVMF_PORT", 00:21:09.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.677 "hdgst": ${hdgst:-false}, 00:21:09.677 "ddgst": ${ddgst:-false} 00:21:09.677 }, 00:21:09.677 "method": "bdev_nvme_attach_controller" 00:21:09.677 } 00:21:09.677 EOF 00:21:09.677 )") 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:09.677 12:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.677 { 00:21:09.677 "params": { 00:21:09.677 "name": "Nvme$subsystem", 00:21:09.677 "trtype": "$TEST_TRANSPORT", 00:21:09.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.677 "adrfam": "ipv4", 00:21:09.677 "trsvcid": "$NVMF_PORT", 00:21:09.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.677 "hdgst": ${hdgst:-false}, 00:21:09.677 "ddgst": ${ddgst:-false} 00:21:09.677 }, 00:21:09.677 "method": "bdev_nvme_attach_controller" 00:21:09.677 } 00:21:09.677 EOF 00:21:09.677 )") 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.677 { 00:21:09.677 "params": { 00:21:09.677 "name": "Nvme$subsystem", 00:21:09.677 "trtype": "$TEST_TRANSPORT", 00:21:09.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.677 "adrfam": "ipv4", 00:21:09.677 "trsvcid": "$NVMF_PORT", 00:21:09.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.677 "hdgst": ${hdgst:-false}, 00:21:09.677 "ddgst": ${ddgst:-false} 00:21:09.677 }, 00:21:09.677 "method": "bdev_nvme_attach_controller" 00:21:09.677 } 00:21:09.677 EOF 00:21:09.677 )") 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.677 { 00:21:09.677 "params": { 00:21:09.677 "name": "Nvme$subsystem", 00:21:09.677 "trtype": "$TEST_TRANSPORT", 00:21:09.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.677 "adrfam": "ipv4", 00:21:09.677 "trsvcid": "$NVMF_PORT", 00:21:09.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.677 "hdgst": ${hdgst:-false}, 00:21:09.677 "ddgst": ${ddgst:-false} 00:21:09.677 }, 00:21:09.677 "method": "bdev_nvme_attach_controller" 00:21:09.677 } 00:21:09.677 EOF 00:21:09.677 )") 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:09.677 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
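The gen_nvmf_target_json trace above builds one bdev_nvme_attach_controller fragment per subsystem and then comma-joins them; the resolved output follows below. A stripped-down sketch of that assembly, limited to what is actually echoed in this trace (the joined list is then run through jq, presumably embedded in the larger config document that bdevperf ultimately consumes; that outer wrapping is not echoed here):

config=()
for subsystem in {1..10}; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# comma-join the fragments, as the IFS=, / printf step below does
(IFS=,; printf '%s\n' "${config[*]}")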
00:21:09.678 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:09.678 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:09.678 "params": { 00:21:09.678 "name": "Nvme1", 00:21:09.678 "trtype": "tcp", 00:21:09.678 "traddr": "10.0.0.2", 00:21:09.678 "adrfam": "ipv4", 00:21:09.678 "trsvcid": "4420", 00:21:09.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:09.678 "hdgst": false, 00:21:09.678 "ddgst": false 00:21:09.678 }, 00:21:09.678 "method": "bdev_nvme_attach_controller" 00:21:09.678 },{ 00:21:09.678 "params": { 00:21:09.678 "name": "Nvme2", 00:21:09.678 "trtype": "tcp", 00:21:09.678 "traddr": "10.0.0.2", 00:21:09.678 "adrfam": "ipv4", 00:21:09.678 "trsvcid": "4420", 00:21:09.678 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:09.678 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:09.678 "hdgst": false, 00:21:09.678 "ddgst": false 00:21:09.678 }, 00:21:09.678 "method": "bdev_nvme_attach_controller" 00:21:09.678 },{ 00:21:09.678 "params": { 00:21:09.678 "name": "Nvme3", 00:21:09.678 "trtype": "tcp", 00:21:09.678 "traddr": "10.0.0.2", 00:21:09.678 "adrfam": "ipv4", 00:21:09.678 "trsvcid": "4420", 00:21:09.678 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:09.678 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:09.678 "hdgst": false, 00:21:09.678 "ddgst": false 00:21:09.678 }, 00:21:09.678 "method": "bdev_nvme_attach_controller" 00:21:09.678 },{ 00:21:09.678 "params": { 00:21:09.678 "name": "Nvme4", 00:21:09.678 "trtype": "tcp", 00:21:09.678 "traddr": "10.0.0.2", 00:21:09.678 "adrfam": "ipv4", 00:21:09.678 "trsvcid": "4420", 00:21:09.678 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:09.678 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:09.678 "hdgst": false, 00:21:09.678 "ddgst": false 00:21:09.678 }, 00:21:09.678 "method": "bdev_nvme_attach_controller" 00:21:09.678 },{ 00:21:09.678 "params": { 00:21:09.678 "name": "Nvme5", 00:21:09.678 "trtype": "tcp", 00:21:09.678 "traddr": "10.0.0.2", 00:21:09.678 "adrfam": "ipv4", 00:21:09.678 "trsvcid": "4420", 00:21:09.678 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:09.678 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:09.678 "hdgst": false, 00:21:09.678 "ddgst": false 00:21:09.678 }, 00:21:09.678 "method": "bdev_nvme_attach_controller" 00:21:09.678 },{ 00:21:09.678 "params": { 00:21:09.678 "name": "Nvme6", 00:21:09.678 "trtype": "tcp", 00:21:09.678 "traddr": "10.0.0.2", 00:21:09.678 "adrfam": "ipv4", 00:21:09.678 "trsvcid": "4420", 00:21:09.678 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:09.678 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:09.678 "hdgst": false, 00:21:09.678 "ddgst": false 00:21:09.678 }, 00:21:09.678 "method": "bdev_nvme_attach_controller" 00:21:09.678 },{ 00:21:09.678 "params": { 00:21:09.678 "name": "Nvme7", 00:21:09.678 "trtype": "tcp", 00:21:09.678 "traddr": "10.0.0.2", 00:21:09.678 "adrfam": "ipv4", 00:21:09.678 "trsvcid": "4420", 00:21:09.678 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:09.678 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:09.678 "hdgst": false, 00:21:09.678 "ddgst": false 00:21:09.678 }, 00:21:09.678 "method": "bdev_nvme_attach_controller" 00:21:09.678 },{ 00:21:09.678 "params": { 00:21:09.678 "name": "Nvme8", 00:21:09.678 "trtype": "tcp", 00:21:09.678 "traddr": "10.0.0.2", 00:21:09.678 "adrfam": "ipv4", 00:21:09.678 "trsvcid": "4420", 00:21:09.678 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:09.678 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:09.678 "hdgst": false, 00:21:09.678 "ddgst": false 00:21:09.678 }, 00:21:09.678 "method": "bdev_nvme_attach_controller" 00:21:09.678 },{ 00:21:09.678 "params": { 00:21:09.678 "name": "Nvme9", 00:21:09.678 "trtype": "tcp", 00:21:09.678 "traddr": "10.0.0.2", 00:21:09.678 "adrfam": "ipv4", 00:21:09.678 "trsvcid": "4420", 00:21:09.678 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:09.678 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:09.678 "hdgst": false, 00:21:09.678 "ddgst": false 00:21:09.678 }, 00:21:09.678 "method": "bdev_nvme_attach_controller" 00:21:09.678 },{ 00:21:09.678 "params": { 00:21:09.678 "name": "Nvme10", 00:21:09.678 "trtype": "tcp", 00:21:09.678 "traddr": "10.0.0.2", 00:21:09.678 "adrfam": "ipv4", 00:21:09.678 "trsvcid": "4420", 00:21:09.678 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:09.678 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:09.678 "hdgst": false, 00:21:09.678 "ddgst": false 00:21:09.678 }, 00:21:09.678 "method": "bdev_nvme_attach_controller" 00:21:09.678 }' 00:21:09.678 [2024-11-15 12:42:49.981272] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:21:09.678 [2024-11-15 12:42:49.981351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1070376 ] 00:21:09.944 [2024-11-15 12:42:50.055619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.944 [2024-11-15 12:42:50.117912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.839 Running I/O for 10 seconds... 00:21:11.839 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.839 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:11.840 12:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:11.840 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:12.098 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:12.098 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:12.098 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:12.098 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:12.098 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.098 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:12.098 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.098 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:12.098 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:12.098 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:12.355 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:12.355 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:12.356 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:12.356 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.356 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:12.356 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:12.356 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # read_io_count=131 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1070314 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1070314 ']' 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1070314 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1070314 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1070314' 00:21:12.630 killing process with pid 1070314 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1070314 00:21:12.630 12:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1070314 00:21:12.630 [2024-11-15 12:42:52.757661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.630 [2024-11-15 12:42:52.757756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.630 [2024-11-15 12:42:52.757783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.630 [2024-11-15 12:42:52.757796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.630 [2024-11-15 12:42:52.757809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.630 [2024-11-15 12:42:52.757821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.630 [2024-11-15 12:42:52.757833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.630 [2024-11-15 12:42:52.757846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.757858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with 
the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.757870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.757882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.757895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.757907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.757919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.757932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.757944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.757956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.757968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.757981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.757993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758431] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.758552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0d1b0 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 
00:21:12.631 [2024-11-15 12:42:52.760406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.631 [2024-11-15 12:42:52.760636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is 
same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.760991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.761003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.761014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.761033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.761046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.761059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.761071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee010 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.763032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.632 [2024-11-15 12:42:52.763074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.763102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.632 [2024-11-15 12:42:52.763124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.763139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.632 [2024-11-15 12:42:52.763152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.763167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.632 [2024-11-15 12:42:52.763180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.763193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf060 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.763281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.632 [2024-11-15 12:42:52.763303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.763318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.632 [2024-11-15 12:42:52.763331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.763345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.632 [2024-11-15 12:42:52.763357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.763371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.632 [2024-11-15 12:42:52.763384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.763397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9736f0 is same with the state(6) to be set 00:21:12.632 [2024-11-15 12:42:52.763821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.763849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.763876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.763891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.763908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.763923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.763938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.763952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.763967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.763981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.764004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.764020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.764038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.764052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.764067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.764081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.764098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.764112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.764127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.764140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.764156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.764170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.764185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.764198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.764213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.764227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.764242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.764256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.764270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.632 [2024-11-15 12:42:52.764284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.632 [2024-11-15 12:42:52.764299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633 [2024-11-15 12:42:52.764313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633 [2024-11-15 12:42:52.764328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633 [2024-11-15 12:42:52.764341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633 [2024-11-15 12:42:52.764358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.764968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.633
[2024-11-15 12:42:52.764981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.633
[2024-11-15 12:42:52.764995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.633
[2024-11-15 12:42:52.765007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634
[2024-11-15 12:42:52.765020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634
[2024-11-15 12:42:52.765049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634
[2024-11-15 12:42:52.765062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634
[2024-11-15 12:42:52.765074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634
[2024-11-15 12:42:52.765096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634
[2024-11-15 12:42:52.765110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634
[2024-11-15 12:42:52.765122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634
[2024-11-15 12:42:52.765151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634
[2024-11-15 12:42:52.765164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634
[2024-11-15 12:42:52.765177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634
[2024-11-15 12:42:52.765190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634
[2024-11-15 12:42:52.765203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634
[2024-11-15 12:42:52.765216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634
[2024-11-15 12:42:52.765245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634
[2024-11-15 12:42:52.765258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634
[2024-11-15 12:42:52.765271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634
[2024-11-15 12:42:52.765284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634
[2024-11-15 12:42:52.765297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634
[2024-11-15 12:42:52.765325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634
[2024-11-15 12:42:52.765339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634
[2024-11-15 12:42:52.765352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0db50 is same with the state(6) to be set 00:21:12.634
[2024-11-15 12:42:52.765359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634
[2024-11-15 12:42:52.765373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765678] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.634 [2024-11-15 12:42:52.765837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.634 [2024-11-15 12:42:52.765882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:12.634 [2024-11-15 12:42:52.767011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.634 [2024-11-15 12:42:52.767051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to 
be set 00:21:12.635 [2024-11-15 12:42:52.767154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767691] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.767835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e040 is same with the state(6) to be set 00:21:12.635 [2024-11-15 12:42:52.769213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.635 [2024-11-15 12:42:52.769241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.635 [2024-11-15 12:42:52.769263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.635 [2024-11-15 12:42:52.769279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.635 [2024-11-15 12:42:52.769296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.635 [2024-11-15 12:42:52.769310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.635 [2024-11-15 12:42:52.769325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.635 [2024-11-15 12:42:52.769339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 
12:42:52.769382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 
12:42:52.769675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.769906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with t[2024-11-15 12:42:52.769920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:21:12.636 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.769939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769948] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.769953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.769969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.769974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.769983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.769987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.769998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:1[2024-11-15 12:42:52.770000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 he state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.770014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with t[2024-11-15 12:42:52.770014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:21:12.636 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.770036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.770038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.770048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.770052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.770061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.770068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:1[2024-11-15 12:42:52.770074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 he state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.770097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with t[2024-11-15 12:42:52.770098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:21:12.636 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.770111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.770115] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.770124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.770130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.770137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.770146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.770150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.770160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.770163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.770175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with t[2024-11-15 12:42:52.770175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:1he state(6) to be set 00:21:12.636 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.770190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with t[2024-11-15 12:42:52.770192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:21:12.636 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.770204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.770208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.636 [2024-11-15 12:42:52.770217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.770222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.636 [2024-11-15 12:42:52.770229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.636 [2024-11-15 12:42:52.770238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 [2024-11-15 12:42:52.770241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 [2024-11-15 
12:42:52.770266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 [2024-11-15 12:42:52.770284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 [2024-11-15 12:42:52.770298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 [2024-11-15 12:42:52.770310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 [2024-11-15 12:42:52.770323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:1[2024-11-15 12:42:52.770336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 he state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-15 12:42:52.770350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 he state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 [2024-11-15 12:42:52.770380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 [2024-11-15 12:42:52.770393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 [2024-11-15 12:42:52.770407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 
[2024-11-15 12:42:52.770419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 [2024-11-15 12:42:52.770432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 [2024-11-15 12:42:52.770445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with t[2024-11-15 12:42:52.770457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:1he state(6) to be set 00:21:12.637 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 [2024-11-15 12:42:52.770476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 [2024-11-15 12:42:52.770489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 [2024-11-15 12:42:52.770502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 [2024-11-15 12:42:52.770515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 [2024-11-15 12:42:52.770528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-15 12:42:52.770541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 he state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 [2024-11-15 12:42:52.770568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 
[2024-11-15 12:42:52.770571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 [2024-11-15 12:42:52.770580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 [2024-11-15 12:42:52.770593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 [2024-11-15 12:42:52.770606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 [2024-11-15 12:42:52.770619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-15 12:42:52.770632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 he state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 [2024-11-15 12:42:52.770658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 [2024-11-15 12:42:52.770671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.637 [2024-11-15 12:42:52.770684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-15 12:42:52.770696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.637 he state(6) to be set 00:21:12.637 [2024-11-15 12:42:52.770710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.770712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:12.638 [2024-11-15 12:42:52.770731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.770734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.770745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.770751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.770758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0e9e0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.770782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.770799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.770812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.770827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.770841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.770856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.770869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.770885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.770899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.770914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.770927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.770946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.770961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.770976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.770990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.771004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.771024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.771039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.771053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.771068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.771085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.771100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.771113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.771128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.771142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.771157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.771170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.771186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.771199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.771214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.638 [2024-11-15 12:42:52.771228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.638 [2024-11-15 12:42:52.771262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:12.638 [2024-11-15 12:42:52.771641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:12.638 [2024-11-15 12:42:52.771716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96a220 (9): Bad file descriptor 00:21:12.638 [2024-11-15 12:42:52.772267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.638 [2024-11-15 12:42:52.772497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772570] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 
00:21:12.639 [2024-11-15 12:42:52.772869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.772992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.773005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.773021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.773034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.773047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.773059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.773073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.773099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0eeb0 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.773336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:12.639 [2024-11-15 12:42:52.773399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd97090 (9): Bad file descriptor 00:21:12.639 [2024-11-15 12:42:52.773460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.639 [2024-11-15 12:42:52.773481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.639 [2024-11-15 12:42:52.773496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.639 [2024-11-15 12:42:52.773510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.639 [2024-11-15 12:42:52.773524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.639 [2024-11-15 12:42:52.773537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.639 [2024-11-15 12:42:52.773552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.639 [2024-11-15 12:42:52.773565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.639 [2024-11-15 12:42:52.773578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ef70 is same with the state(6) to be set 00:21:12.639 [2024-11-15 12:42:52.773629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf060 (9): Bad file descriptor 00:21:12.639 [2024-11-15 12:42:52.773683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.639 [2024-11-15 12:42:52.773704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.639 [2024-11-15 12:42:52.773726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.640 [2024-11-15 12:42:52.773743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.640 [2024-11-15 12:42:52.773757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.640 [2024-11-15 12:42:52.773780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.640 [2024-11-15 12:42:52.773794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.640 [2024-11-15 12:42:52.773807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.640 [2024-11-15 12:42:52.773820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9e1f0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.773875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.640 [2024-11-15 12:42:52.773896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.640 [2024-11-15 12:42:52.773910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.640 [2024-11-15 12:42:52.773924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.640 [2024-11-15 12:42:52.773937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.640 [2024-11-15 12:42:52.773950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.640 [2024-11-15 12:42:52.773964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.640 [2024-11-15 12:42:52.773977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.640 [2024-11-15 12:42:52.773989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9710b0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.640 [2024-11-15 12:42:52.774063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.640 [2024-11-15 12:42:52.774077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.640 [2024-11-15 12:42:52.774091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.640 [2024-11-15 12:42:52.774105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.640 [2024-11-15 12:42:52.774118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.640 [2024-11-15 12:42:52.774132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.640 [2024-11-15 12:42:52.774145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.640 [2024-11-15 12:42:52.774158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db110 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9736f0 (9): Bad file descriptor 00:21:12.640 [2024-11-15 12:42:52.774286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774360] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.640 [2024-11-15 12:42:52.774588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 
00:21:12.641 [2024-11-15 12:42:52.774624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is 
same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9dde0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.774975] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:12.641 [2024-11-15 12:42:52.775131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.641 [2024-11-15 12:42:52.775159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96a220 with addr=10.0.0.2, port=4420 00:21:12.641 [2024-11-15 12:42:52.775174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a220 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775248] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:12.641 [2024-11-15 12:42:52.775335] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:12.641 [2024-11-15 12:42:52.775700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.775993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.776005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.776027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.776039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.641 [2024-11-15 12:42:52.776051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776164] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.642 [2024-11-15 12:42:52.776177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd97090 with addr=10.0.0.2, port=4420 00:21:12.642 [2024-11-15 12:42:52.776203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97090 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96a220 (9): Bad file descriptor 00:21:12.642 [2024-11-15 12:42:52.776242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776370] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:12.642 [2024-11-15 12:42:52.776377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776453] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:12.642 [2024-11-15 12:42:52.776472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e2b0 is same with the state(6) to be set 00:21:12.642 [2024-11-15 12:42:52.776659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd97090 (9): Bad file descriptor 00:21:12.642 [2024-11-15 12:42:52.776683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:12.642 [2024-11-15 12:42:52.776697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:12.642 [2024-11-15 12:42:52.776713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:12.642 [2024-11-15 12:42:52.776741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:21:12.642 [2024-11-15 12:42:52.776791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.642 [2024-11-15 12:42:52.776811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.642 [2024-11-15 12:42:52.776832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.642 [2024-11-15 12:42:52.776848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.642 [2024-11-15 12:42:52.776864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.642 [2024-11-15 12:42:52.776878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.642 [2024-11-15 12:42:52.776894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.642 [2024-11-15 12:42:52.776907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.642 [2024-11-15 12:42:52.776923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.642 [2024-11-15 12:42:52.776936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.642 [2024-11-15 12:42:52.776957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.642 [2024-11-15 12:42:52.776972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.642 [2024-11-15 12:42:52.776986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 
12:42:52.777104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777398] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.643 [2024-11-15 12:42:52.777820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.643 [2024-11-15 12:42:52.777834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.777849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.777862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.777877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.777890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.777905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.777918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.777934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.777947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.777962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.777975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.777990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.644 [2024-11-15 12:42:52.778676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.644 [2024-11-15 12:42:52.778690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.778704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd736f0 is same with the state(6) to be set 00:21:12.645 [2024-11-15 12:42:52.779123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:12.645 [2024-11-15 12:42:52.779146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:12.645 [2024-11-15 12:42:52.779161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:12.645 [2024-11-15 12:42:52.779173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:21:12.645 [2024-11-15 12:42:52.780367] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:12.645 [2024-11-15 12:42:52.780516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:12.645 [2024-11-15 12:42:52.780548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9ef70 (9): Bad file descriptor 00:21:12.645 [2024-11-15 12:42:52.780677] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:12.645 [2024-11-15 12:42:52.781303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.645 [2024-11-15 12:42:52.781332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9ef70 with addr=10.0.0.2, port=4420 00:21:12.645 [2024-11-15 12:42:52.781348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ef70 is same with the state(6) to be set 00:21:12.645 [2024-11-15 12:42:52.781443] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:12.645 [2024-11-15 12:42:52.781479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9ef70 (9): Bad file descriptor 00:21:12.645 [2024-11-15 12:42:52.781559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:12.645 [2024-11-15 12:42:52.781579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:12.645 [2024-11-15 12:42:52.781592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:12.645 [2024-11-15 12:42:52.781605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:12.645 [2024-11-15 12:42:52.783390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.645 [2024-11-15 12:42:52.783414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.783430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.645 [2024-11-15 12:42:52.783444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.783457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.645 [2024-11-15 12:42:52.783470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.783484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.645 [2024-11-15 12:42:52.783497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.783510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97e90 is same with the state(6) to be set 00:21:12.645 [2024-11-15 12:42:52.783558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.645 [2024-11-15 12:42:52.783579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.783594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.645 [2024-11-15 12:42:52.783607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.783620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.645 [2024-11-15 12:42:52.783633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.783648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.645 [2024-11-15 12:42:52.783660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.783678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd981c0 is same with the state(6) to be set 00:21:12.645 [2024-11-15 12:42:52.783715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9e1f0 (9): Bad file descriptor 00:21:12.645 [2024-11-15 12:42:52.783765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9710b0 (9): Bad file descriptor 00:21:12.645 [2024-11-15 12:42:52.783796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x8db110 (9): Bad file descriptor 00:21:12.645 [2024-11-15 12:42:52.783941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.645 [2024-11-15 12:42:52.783964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.783986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.645 [2024-11-15 12:42:52.784001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.784028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.645 [2024-11-15 12:42:52.784043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.784058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.645 [2024-11-15 12:42:52.784072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.784088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.645 [2024-11-15 12:42:52.784103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.784118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.645 [2024-11-15 12:42:52.784131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.784147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.645 [2024-11-15 12:42:52.784161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.784177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.645 [2024-11-15 12:42:52.784191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.784206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.645 [2024-11-15 12:42:52.784220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.645 [2024-11-15 12:42:52.784235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.645 [2024-11-15 12:42:52.784249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:12.645 [2024-11-15 12:42:52.784264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.645 [2024-11-15 12:42:52.784278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 
[2024-11-15 12:42:52.784565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 
12:42:52.784871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.784971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.784986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.646 [2024-11-15 12:42:52.785000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.646 [2024-11-15 12:42:52.785015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785174] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.647 [2024-11-15 12:42:52.785887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.647 [2024-11-15 12:42:52.785901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe60a20 is same with the state(6) to be set 00:21:12.648 [2024-11-15 12:42:52.787214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787392] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.787975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.787991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.788004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.788028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.788042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.788057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.788070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.648 [2024-11-15 12:42:52.788085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.648 [2024-11-15 12:42:52.788099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:12.649 [2024-11-15 12:42:52.788612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 
12:42:52.788917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.649 [2024-11-15 12:42:52.788959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.649 [2024-11-15 12:42:52.788974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.650 [2024-11-15 12:42:52.788988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.650 [2024-11-15 12:42:52.789008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.650 [2024-11-15 12:42:52.789022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.650 [2024-11-15 12:42:52.789047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.650 [2024-11-15 12:42:52.789061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.650 [2024-11-15 12:42:52.789076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.650 [2024-11-15 12:42:52.789090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.650 [2024-11-15 12:42:52.789105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.650 [2024-11-15 12:42:52.789119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.650 [2024-11-15 12:42:52.789134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.650 [2024-11-15 12:42:52.789148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.650 [2024-11-15 12:42:52.789162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb3a90 is same with the state(6) to be set 00:21:12.650 [2024-11-15 12:42:52.790424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:12.650 [2024-11-15 12:42:52.790454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:12.650 [2024-11-15 12:42:52.790601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:12.650 [2024-11-15 12:42:52.790834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.650 [2024-11-15 
12:42:52.790864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9736f0 with addr=10.0.0.2, port=4420 00:21:12.650 [2024-11-15 12:42:52.790880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9736f0 is same with the state(6) to be set 00:21:12.650 [2024-11-15 12:42:52.790984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.650 [2024-11-15 12:42:52.791009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf060 with addr=10.0.0.2, port=4420 00:21:12.650 [2024-11-15 12:42:52.791029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf060 is same with the state(6) to be set 00:21:12.650 [2024-11-15 12:42:52.791609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:12.650 [2024-11-15 12:42:52.791737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.650 [2024-11-15 12:42:52.791766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96a220 with addr=10.0.0.2, port=4420 00:21:12.650 [2024-11-15 12:42:52.791782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a220 is same with the state(6) to be set 00:21:12.650 [2024-11-15 12:42:52.791804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9736f0 (9): Bad file descriptor 00:21:12.650 [2024-11-15 12:42:52.791824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf060 (9): Bad file descriptor 00:21:12.650 [2024-11-15 12:42:52.791981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.650 [2024-11-15 12:42:52.792018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd97090 with addr=10.0.0.2, port=4420 00:21:12.650 [2024-11-15 12:42:52.792039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97090 is same with the state(6) to be set 00:21:12.650 [2024-11-15 12:42:52.792058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96a220 (9): Bad file descriptor 00:21:12.650 [2024-11-15 12:42:52.792075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:12.650 [2024-11-15 12:42:52.792089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:12.650 [2024-11-15 12:42:52.792104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:12.650 [2024-11-15 12:42:52.792121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:12.650 [2024-11-15 12:42:52.792137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:12.650 [2024-11-15 12:42:52.792148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:12.650 [2024-11-15 12:42:52.792161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:12.650 [2024-11-15 12:42:52.792173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:12.650 [2024-11-15 12:42:52.792245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:12.650 [2024-11-15 12:42:52.792278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd97090 (9): Bad file descriptor 00:21:12.650 [2024-11-15 12:42:52.792297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:12.650 [2024-11-15 12:42:52.792310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:12.650 [2024-11-15 12:42:52.792323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:12.650 [2024-11-15 12:42:52.792336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:12.650 [2024-11-15 12:42:52.792456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.650 [2024-11-15 12:42:52.792482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9ef70 with addr=10.0.0.2, port=4420 00:21:12.650 [2024-11-15 12:42:52.792497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ef70 is same with the state(6) to be set 00:21:12.650 [2024-11-15 12:42:52.792511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:12.650 [2024-11-15 12:42:52.792523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:12.650 [2024-11-15 12:42:52.792536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:12.650 [2024-11-15 12:42:52.792548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:12.650 [2024-11-15 12:42:52.792597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9ef70 (9): Bad file descriptor 00:21:12.650 [2024-11-15 12:42:52.792645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:12.650 [2024-11-15 12:42:52.792662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:12.650 [2024-11-15 12:42:52.792675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:12.650 [2024-11-15 12:42:52.792687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:12.650 [2024-11-15 12:42:52.793399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd97e90 (9): Bad file descriptor 00:21:12.650 [2024-11-15 12:42:52.793443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd981c0 (9): Bad file descriptor 00:21:12.650 [2024-11-15 12:42:52.793591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.650 [2024-11-15 12:42:52.793614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.650 [2024-11-15 12:42:52.793639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.650 [2024-11-15 12:42:52.793655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.793671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.793686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.793702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.793716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.793742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.793756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.793779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.793793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.793809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.793823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.793839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.793853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.793869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.793883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.793898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.793913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.793928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.793942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.793958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.793972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.793993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.651 [2024-11-15 12:42:52.794547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.651 [2024-11-15 12:42:52.794561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.794576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.794591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.794607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.794621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.794636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.794651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.794666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.794680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.794696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.794710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.794732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.794748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.794774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.794792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.794809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.794823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.794838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.794853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.794868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.794882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.794898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.794911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.794927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.794941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.794957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.794971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.794986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.795000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.795016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.795034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.795049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.795062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.795078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.795092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.795107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.795120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.795136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.795150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.795169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.795183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.795199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.795213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.795228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.795242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.795257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.795271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.795286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.652 [2024-11-15 12:42:52.795300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.652 [2024-11-15 12:42:52.795315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.795329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.795345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.795358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.795373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.795387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.795403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.795416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.795432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.795446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.795461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.795475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.795491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.795505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.795520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.795538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.795554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.795567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.795581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb784b0 is same with the state(6) to be set 00:21:12.653 [2024-11-15 12:42:52.796869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.796892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.796912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.796927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.796943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.796957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.796973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.796986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797016] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.653 [2024-11-15 12:42:52.797439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.653 [2024-11-15 12:42:52.797454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.797980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.797994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.798010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.798024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.798040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.798054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.798069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.798083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.798098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.798112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.798127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.798141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.798157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.798171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.798186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.798200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.798216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.798230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:12.654 [2024-11-15 12:42:52.798245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.798259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.798274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.798288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.798304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.798317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.654 [2024-11-15 12:42:52.798336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.654 [2024-11-15 12:42:52.798350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 
12:42:52.798541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.798805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.798819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f50 is same with the state(6) to be set 00:21:12.655 [2024-11-15 12:42:52.800066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.800089] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.800109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.800124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.800140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.800154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.800170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.800184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.800200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.800214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.800229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.800243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.800259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.800273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.800288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.800302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.800318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.800337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.800353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.800367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.800383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.800397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.800413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.800427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.800442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.655 [2024-11-15 12:42:52.800456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.655 [2024-11-15 12:42:52.800472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.800974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.800989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.801003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.801019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.801037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.801052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.801066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.801081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.801098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.801114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.801128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.801143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.801156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.801172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.801186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.801201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.801215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.801230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.801243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.801259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.801273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.656 [2024-11-15 12:42:52.801288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.656 [2024-11-15 12:42:52.801301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.801974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.801989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.657 [2024-11-15 12:42:52.802003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.657 [2024-11-15 12:42:52.802017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd77500 is same with the state(6) to be set 00:21:12.657 [2024-11-15 12:42:52.803257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:12.657 [2024-11-15 12:42:52.803287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:12.657 [2024-11-15 12:42:52.803305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:12.657 [2024-11-15 12:42:52.803709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.657 [2024-11-15 12:42:52.803745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9710b0 with addr=10.0.0.2, port=4420 00:21:12.657 [2024-11-15 12:42:52.803763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9710b0 is same with the state(6) to be set 00:21:12.657 [2024-11-15 12:42:52.803889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.657 [2024-11-15 12:42:52.803913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8db110 with addr=10.0.0.2, port=4420 00:21:12.657 [2024-11-15 12:42:52.803929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db110 is same with the state(6) to be set 00:21:12.657 [2024-11-15 12:42:52.804021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.658 [2024-11-15 12:42:52.804045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9e1f0 with addr=10.0.0.2, port=4420 00:21:12.658 [2024-11-15 12:42:52.804060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9e1f0 is same with the state(6) to be set 00:21:12.658 [2024-11-15 12:42:52.804914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:12.658 [2024-11-15 12:42:52.804941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:12.658 [2024-11-15 12:42:52.804957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting 
controller 00:21:12.658 [2024-11-15 12:42:52.804973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:12.658 [2024-11-15 12:42:52.804994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:12.658 [2024-11-15 12:42:52.805060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9710b0 (9): Bad file descriptor 00:21:12.658 [2024-11-15 12:42:52.805085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8db110 (9): Bad file descriptor 00:21:12.658 [2024-11-15 12:42:52.805103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9e1f0 (9): Bad file descriptor 00:21:12.658 [2024-11-15 12:42:52.805188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.658 [2024-11-15 12:42:52.805885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.658 [2024-11-15 12:42:52.805901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.805918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.805934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.805948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.805963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.805977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.805992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:12.659 [2024-11-15 12:42:52.806050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 
[2024-11-15 12:42:52.806348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.659 [2024-11-15 12:42:52.806565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.659 [2024-11-15 12:42:52.806580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.806593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.806609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.806623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 
12:42:52.806638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.806656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.806672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.806686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.806701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.806714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.806745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.806759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.806775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.806789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.806804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.806818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.806833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.806846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.806862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.806876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.806889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd78a90 is same with the state(6) to be set 00:21:12.660 [2024-11-15 12:42:52.808104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808163] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.660 [2024-11-15 12:42:52.808646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.660 [2024-11-15 12:42:52.808661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.808675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.808691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.808704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.808726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.808742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.808759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.808773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.808788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.808802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.808817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.808832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.808848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.808862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.808877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.808891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.808906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.808920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.808935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.808949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.808964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.808978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.808993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.661 [2024-11-15 12:42:52.809480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.661 [2024-11-15 12:42:52.809495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.809980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.809995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-11-15 12:42:52.810028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.662 [2024-11-15 12:42:52.810044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd79fd0 is same with the state(6) to be set 00:21:12.662 [2024-11-15 12:42:52.811675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:12.662 task offset: 26368 on job bdev=Nvme2n1 fails 00:21:12.662 00:21:12.662 Latency(us) 00:21:12.662 [2024-11-15T11:42:53.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.662 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.662 Job: Nvme1n1 ended in about 0.91 seconds with error 00:21:12.662 Verification LBA range: start 0x0 length 0x400 00:21:12.662 Nvme1n1 : 0.91 140.17 8.76 70.09 0.00 300968.01 39224.51 237677.23 00:21:12.662 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.662 Job: Nvme2n1 ended in about 0.89 seconds with error 00:21:12.662 Verification LBA range: start 0x0 length 0x400 00:21:12.662 Nvme2n1 : 0.89 214.61 13.41 71.54 0.00 216485.78 3932.16 243891.01 00:21:12.662 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.662 Job: Nvme3n1 ended in about 0.92 seconds with error 00:21:12.662 Verification LBA range: start 0x0 length 0x400 00:21:12.662 Nvme3n1 : 0.92 213.48 13.34 69.35 0.00 214690.43 18544.26 248551.35 00:21:12.662 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.662 Job: Nvme4n1 ended in about 0.91 seconds with error 00:21:12.662 Verification LBA range: start 0x0 length 0x400 00:21:12.662 Nvme4n1 : 0.91 211.82 13.24 70.61 0.00 210088.53 3956.43 245444.46 00:21:12.662 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.662 Job: Nvme5n1 ended in about 0.90 seconds with error 00:21:12.662 Verification LBA range: start 0x0 length 0x400 00:21:12.662 Nvme5n1 : 0.90 213.53 13.35 71.18 0.00 203643.69 3070.48 251658.24 00:21:12.662 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.662 Job: Nvme6n1 ended in about 0.93 seconds with error 00:21:12.662 Verification LBA range: start 0x0 length 0x400 00:21:12.662 Nvme6n1 : 0.93 138.23 8.64 69.11 0.00 274519.48 21554.06 274959.93 00:21:12.662 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.662 Job: Nvme7n1 ended in about 0.93 seconds with error 00:21:12.662 Verification LBA range: start 0x0 length 0x400 00:21:12.662 Nvme7n1 : 0.93 137.75 8.61 68.88 0.00 269648.28 18447.17 254765.13 00:21:12.662 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.662 Job: Nvme8n1 ended in about 0.93 seconds with error 00:21:12.663 
Verification LBA range: start 0x0 length 0x400 00:21:12.663 Nvme8n1 : 0.93 145.61 9.10 59.96 0.00 264114.63 15728.64 260978.92 00:21:12.663 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.663 Job: Nvme9n1 ended in about 0.94 seconds with error 00:21:12.663 Verification LBA range: start 0x0 length 0x400 00:21:12.663 Nvme9n1 : 0.94 136.58 8.54 68.29 0.00 260309.90 21845.33 270299.59 00:21:12.663 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:12.663 Job: Nvme10n1 ended in about 0.92 seconds with error 00:21:12.663 Verification LBA range: start 0x0 length 0x400 00:21:12.663 Nvme10n1 : 0.92 139.68 8.73 69.84 0.00 247483.86 21262.79 279620.27 00:21:12.663 [2024-11-15T11:42:53.007Z] =================================================================================================================== 00:21:12.663 [2024-11-15T11:42:53.007Z] Total : 1691.46 105.72 688.84 0.00 242018.57 3070.48 279620.27 00:21:12.663 [2024-11-15 12:42:52.840853] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:12.663 [2024-11-15 12:42:52.840942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:12.663 [2024-11-15 12:42:52.841206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.663 [2024-11-15 12:42:52.841242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf060 with addr=10.0.0.2, port=4420 00:21:12.663 [2024-11-15 12:42:52.841263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf060 is same with the state(6) to be set 00:21:12.663 [2024-11-15 12:42:52.841380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.663 [2024-11-15 12:42:52.841406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9736f0 with addr=10.0.0.2, port=4420 00:21:12.663 [2024-11-15 12:42:52.841422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9736f0 is same with the state(6) to be set 00:21:12.663 [2024-11-15 12:42:52.841503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.663 [2024-11-15 12:42:52.841529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96a220 with addr=10.0.0.2, port=4420 00:21:12.663 [2024-11-15 12:42:52.841545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96a220 is same with the state(6) to be set 00:21:12.663 [2024-11-15 12:42:52.841645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.663 [2024-11-15 12:42:52.841671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd97090 with addr=10.0.0.2, port=4420 00:21:12.663 [2024-11-15 12:42:52.841687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97090 is same with the state(6) to be set 00:21:12.663 [2024-11-15 12:42:52.841792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.663 [2024-11-15 12:42:52.841819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9ef70 with addr=10.0.0.2, port=4420 00:21:12.663 [2024-11-15 12:42:52.841835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ef70 is same with the state(6) to be set 00:21:12.663 [2024-11-15 12:42:52.841851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:12.663 [2024-11-15 12:42:52.841864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:12.663 [2024-11-15 12:42:52.841879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:12.663 [2024-11-15 12:42:52.841898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:12.663 [2024-11-15 12:42:52.841915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:12.663 [2024-11-15 12:42:52.841928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:12.663 [2024-11-15 12:42:52.841941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:12.663 [2024-11-15 12:42:52.841954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:12.663 [2024-11-15 12:42:52.841968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:12.663 [2024-11-15 12:42:52.841979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:12.663 [2024-11-15 12:42:52.841991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:12.663 [2024-11-15 12:42:52.842003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:12.663 [2024-11-15 12:42:52.842303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.663 [2024-11-15 12:42:52.842334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd97e90 with addr=10.0.0.2, port=4420 00:21:12.663 [2024-11-15 12:42:52.842350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97e90 is same with the state(6) to be set 00:21:12.663 [2024-11-15 12:42:52.842423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.663 [2024-11-15 12:42:52.842449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd981c0 with addr=10.0.0.2, port=4420 00:21:12.663 [2024-11-15 12:42:52.842464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd981c0 is same with the state(6) to be set 00:21:12.663 [2024-11-15 12:42:52.842488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf060 (9): Bad file descriptor 00:21:12.663 [2024-11-15 12:42:52.842511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9736f0 (9): Bad file descriptor 00:21:12.663 [2024-11-15 12:42:52.842529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96a220 (9): Bad file descriptor 00:21:12.663 [2024-11-15 12:42:52.842547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd97090 (9): Bad file descriptor 00:21:12.663 [2024-11-15 12:42:52.842564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9ef70 (9): Bad file descriptor 00:21:12.663 [2024-11-15 12:42:52.842621] 
bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:21:12.663 [2024-11-15 12:42:52.842646] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:21:12.663 [2024-11-15 12:42:52.842666] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:21:12.663 [2024-11-15 12:42:52.842686] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:21:12.663 [2024-11-15 12:42:52.842707] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:21:12.663 [2024-11-15 12:42:52.843339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd97e90 (9): Bad file descriptor 00:21:12.663 [2024-11-15 12:42:52.843368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd981c0 (9): Bad file descriptor 00:21:12.663 [2024-11-15 12:42:52.843386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:12.663 [2024-11-15 12:42:52.843398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:12.663 [2024-11-15 12:42:52.843411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:12.663 [2024-11-15 12:42:52.843425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:12.663 [2024-11-15 12:42:52.843439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:12.663 [2024-11-15 12:42:52.843451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:12.663 [2024-11-15 12:42:52.843464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:12.663 [2024-11-15 12:42:52.843475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:12.664 [2024-11-15 12:42:52.843488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:12.664 [2024-11-15 12:42:52.843500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:12.664 [2024-11-15 12:42:52.843513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:12.664 [2024-11-15 12:42:52.843524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:12.664 [2024-11-15 12:42:52.843537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:12.664 [2024-11-15 12:42:52.843549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:12.664 [2024-11-15 12:42:52.843562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:21:12.664 [2024-11-15 12:42:52.843573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:12.664 [2024-11-15 12:42:52.843586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:12.664 [2024-11-15 12:42:52.843597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:12.664 [2024-11-15 12:42:52.843609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:12.664 [2024-11-15 12:42:52.843621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:12.664 [2024-11-15 12:42:52.843689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:12.664 [2024-11-15 12:42:52.843732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:12.664 [2024-11-15 12:42:52.843752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:12.664 [2024-11-15 12:42:52.843789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:12.664 [2024-11-15 12:42:52.843806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:12.664 [2024-11-15 12:42:52.843819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:12.664 [2024-11-15 12:42:52.843831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:12.664 [2024-11-15 12:42:52.843844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:12.664 [2024-11-15 12:42:52.843856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:12.664 [2024-11-15 12:42:52.843868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:12.664 [2024-11-15 12:42:52.843880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:12.664 [2024-11-15 12:42:52.844017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.664 [2024-11-15 12:42:52.844043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9e1f0 with addr=10.0.0.2, port=4420 00:21:12.664 [2024-11-15 12:42:52.844059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9e1f0 is same with the state(6) to be set 00:21:12.664 [2024-11-15 12:42:52.844144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.664 [2024-11-15 12:42:52.844168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8db110 with addr=10.0.0.2, port=4420 00:21:12.664 [2024-11-15 12:42:52.844183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db110 is same with the state(6) to be set 00:21:12.664 [2024-11-15 12:42:52.844251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:12.664 [2024-11-15 12:42:52.844275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9710b0 with addr=10.0.0.2, port=4420 00:21:12.664 [2024-11-15 12:42:52.844290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9710b0 is same with the state(6) to be set 00:21:12.664 [2024-11-15 12:42:52.844335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9e1f0 (9): Bad file descriptor 00:21:12.664 [2024-11-15 12:42:52.844359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8db110 (9): Bad file descriptor 00:21:12.664 [2024-11-15 12:42:52.844377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9710b0 (9): Bad file descriptor 00:21:12.664 [2024-11-15 12:42:52.844416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:12.664 [2024-11-15 12:42:52.844434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:12.664 [2024-11-15 12:42:52.844447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:12.664 [2024-11-15 12:42:52.844460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:12.664 [2024-11-15 12:42:52.844474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:12.664 [2024-11-15 12:42:52.844486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:12.664 [2024-11-15 12:42:52.844498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:12.664 [2024-11-15 12:42:52.844515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:12.664 [2024-11-15 12:42:52.844529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:12.664 [2024-11-15 12:42:52.844541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:12.664 [2024-11-15 12:42:52.844553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:21:12.664 [2024-11-15 12:42:52.844564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:13.230 12:42:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1070376 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1070376 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1070376 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:14.163 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:14.164 
12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:14.164 rmmod nvme_tcp 00:21:14.164 rmmod nvme_fabrics 00:21:14.164 rmmod nvme_keyring 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1070314 ']' 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1070314 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1070314 ']' 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1070314 00:21:14.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1070314) - No such process 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1070314 is not found' 00:21:14.164 Process with pid 1070314 is not found 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.164 12:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.700 00:21:16.700 real 0m7.685s 00:21:16.700 user 0m19.124s 00:21:16.700 sys 0m1.436s 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.700 
************************************ 00:21:16.700 END TEST nvmf_shutdown_tc3 00:21:16.700 ************************************ 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:16.700 ************************************ 00:21:16.700 START TEST nvmf_shutdown_tc4 00:21:16.700 ************************************ 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:16.700 12:42:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:16.700 12:42:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:16.700 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.700 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:16.701 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.701 12:42:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:16.701 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:16.701 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:16.701 12:42:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:16.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:21:16.701 00:21:16.701 --- 10.0.0.2 ping statistics --- 00:21:16.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.701 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:16.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:21:16.701 00:21:16.701 --- 10.0.0.1 ping statistics --- 00:21:16.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.701 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1071289 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1071289 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1071289 ']' 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
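For readability, the nvmf_tcp_init sequence traced above amounts to the following condensed shell sketch. The interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, the 10.0.0.1/10.0.0.2 addresses, port 4420, and the nvmf_tgt invocation are all taken from the trace itself; collecting them into one script (and using a single "ip netns exec" prefix where the harness repeats it) is only an illustration of what the harness does, not its actual file.

    # Target-side port moves into its own network namespace; initiator side stays in the default namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address each side of the link
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring the interfaces (and loopback in the namespace) up
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Permit NVMe/TCP traffic to port 4420 on the initiator interface, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # The target application is then launched inside the namespace so it can listen on 10.0.0.2:4420
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E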
00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.701 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:16.701 [2024-11-15 12:42:56.721005] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:21:16.701 [2024-11-15 12:42:56.721101] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.701 [2024-11-15 12:42:56.795142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:16.701 [2024-11-15 12:42:56.854120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.701 [2024-11-15 12:42:56.854176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.701 [2024-11-15 12:42:56.854205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.701 [2024-11-15 12:42:56.854217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.702 [2024-11-15 12:42:56.854226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.702 [2024-11-15 12:42:56.855844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.702 [2024-11-15 12:42:56.855906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:16.702 [2024-11-15 12:42:56.855973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:16.702 [2024-11-15 12:42:56.855977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.702 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.702 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:16.702 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:16.702 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.702 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:16.702 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.702 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.702 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.702 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:16.702 [2024-11-15 12:42:56.996207] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:16.702 12:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.702 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:16.959 Malloc1 
00:21:16.959 [2024-11-15 12:42:57.085828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.959 Malloc2 00:21:16.959 Malloc3 00:21:16.959 Malloc4 00:21:16.959 Malloc5 00:21:16.959 Malloc6 00:21:17.217 Malloc7 00:21:17.217 Malloc8 00:21:17.217 Malloc9 00:21:17.217 Malloc10 00:21:17.217 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.217 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:17.217 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.217 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:17.217 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1071465 00:21:17.217 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:17.217 12:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:17.473 [2024-11-15 12:42:57.595413] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:22.747 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:22.747 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1071289 00:21:22.747 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1071289 ']' 00:21:22.747 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1071289 00:21:22.747 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:22.747 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.747 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1071289 00:21:22.747 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:22.747 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:22.747 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1071289' 00:21:22.747 killing process with pid 1071289 00:21:22.747 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1071289 00:21:22.747 12:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1071289 00:21:22.747 [2024-11-15 12:43:02.584332] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8ff0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.584408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8ff0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.584425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8ff0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.584438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8ff0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.584450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8ff0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.584463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8ff0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.584476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8ff0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.588320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758fd0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.588359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758fd0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.588374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758fd0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.588386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758fd0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.588398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758fd0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.588410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758fd0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.588421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758fd0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.588452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758fd0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.588464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758fd0 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.589940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758630 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.589976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758630 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.589993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758630 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.590006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758630 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.590027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758630 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.590042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758630 is same with the 
state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.590054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758630 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.590070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758630 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.590082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758630 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.590094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758630 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.590105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758630 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.590117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758630 is same with the state(6) to be set 00:21:22.747 [2024-11-15 12:43:02.590129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1758630 is same with the state(6) to be set 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 starting I/O failed: -6 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 starting I/O failed: -6 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 starting I/O failed: -6 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 starting I/O failed: -6 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.747 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 [2024-11-15 12:43:02.592060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756aa0 is same with the state(6) to be set 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 [2024-11-15 12:43:02.592100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756aa0 is same with Write completed with error (sct=0, sc=8) 00:21:22.748 the state(6) to be set 00:21:22.748 [2024-11-15 12:43:02.592117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756aa0 is same with the state(6) to be set 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 [2024-11-15 12:43:02.592129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1756aa0 is same with the state(6) to be set 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 [2024-11-15 12:43:02.592150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756aa0 is same with the state(6) to be set 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 [2024-11-15 12:43:02.592163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1756aa0 is same with the state(6) to be set 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 [2024-11-15 12:43:02.592488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 
starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 [2024-11-15 12:43:02.593649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 
00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 starting I/O failed: -6 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.748 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 [2024-11-15 12:43:02.594744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O 
failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O 
failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 [2024-11-15 12:43:02.596368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:22.749 NVMe io qpair process completion error 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 starting I/O failed: -6 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.749 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 starting I/O failed: -6 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 starting I/O failed: -6 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 starting I/O failed: -6 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 starting I/O failed: -6 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 starting I/O failed: -6 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 starting I/O failed: -6 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 starting I/O failed: -6 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 starting I/O failed: -6 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 starting I/O failed: -6 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 [2024-11-15 12:43:02.597665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:22.750 starting I/O failed: -6 00:21:22.750 starting I/O failed: -6 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 Write completed with error (sct=0, sc=8) 00:21:22.750 starting I/O failed: -6 00:21:22.750 Write completed with error (sct=0, sc=8) 
00:21:22.750 Write completed with error (sct=0, sc=8)
00:21:22.750 starting I/O failed: -6
[the two entries above repeat for every outstanding write while each failing qpair is drained; the distinct error events follow]
00:21:22.750 [2024-11-15 12:43:02.598772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:22.750 [2024-11-15 12:43:02.599212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1759970 is same with the state(6) to be set
00:21:22.750 [2024-11-15 12:43:02.599249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1759970 is same with the state(6) to be set
00:21:22.750 [2024-11-15 12:43:02.599264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1759970 is same with the state(6) to be set
00:21:22.750 [2024-11-15 12:43:02.599277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1759970 is same with the state(6) to be set
00:21:22.750 [2024-11-15 12:43:02.599290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1759970 is same with the state(6) to be set
00:21:22.750 [2024-11-15 12:43:02.599302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1759970 is same with the state(6) to be set
00:21:22.751 [2024-11-15 12:43:02.599940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:22.751 [2024-11-15 12:43:02.601812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:22.751 NVMe io qpair process completion error
00:21:22.752 [2024-11-15 12:43:02.603178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:22.752 [2024-11-15 12:43:02.604229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:22.752 [2024-11-15 12:43:02.605402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:22.753 [2024-11-15 12:43:02.607340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:22.753 NVMe io qpair process completion error
00:21:22.753 [2024-11-15 12:43:02.608639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:22.754 [2024-11-15 12:43:02.609588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:22.754 [2024-11-15 12:43:02.610852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:22.755 [2024-11-15 12:43:02.612914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:22.755 NVMe io qpair process completion error
00:21:22.755 [2024-11-15 12:43:02.614146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:22.756 [2024-11-15 12:43:02.615112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:22.756 [2024-11-15 12:43:02.616321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:22.757 [2024-11-15 12:43:02.619048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:22.757 NVMe io qpair process completion error
00:21:22.757 [2024-11-15 12:43:02.620223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:22.757 [2024-11-15 12:43:02.621321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:22.758 [2024-11-15 12:43:02.622513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:22.758 Write completed with error (sct=0, sc=8)
00:21:22.758 starting I/O failed: -6 00:21:22.758 Write completed with error (sct=0, sc=8) 00:21:22.758 starting I/O failed: -6 00:21:22.758 Write completed with error (sct=0, sc=8) 00:21:22.758 starting I/O failed: -6 00:21:22.758 Write completed with error (sct=0, sc=8) 00:21:22.758 starting I/O failed: -6 00:21:22.758 Write completed with error (sct=0, sc=8) 00:21:22.758 starting I/O failed: -6 00:21:22.758 Write completed with error (sct=0, sc=8) 00:21:22.758 starting I/O failed: -6 00:21:22.758 Write completed with error (sct=0, sc=8) 00:21:22.758 starting I/O failed: -6 00:21:22.758 Write completed with error (sct=0, sc=8) 00:21:22.758 starting I/O failed: -6 00:21:22.758 Write completed with error (sct=0, sc=8) 00:21:22.758 starting I/O failed: -6 00:21:22.758 Write completed with error (sct=0, sc=8) 00:21:22.758 starting I/O failed: -6 00:21:22.758 Write completed with error (sct=0, sc=8) 00:21:22.758 starting I/O failed: -6 00:21:22.758 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 [2024-11-15 12:43:02.625297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:22.759 NVMe io qpair process completion error 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, 
sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 [2024-11-15 12:43:02.626519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.759 starting I/O failed: -6 00:21:22.759 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 
Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 [2024-11-15 12:43:02.627683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:22.760 starting I/O failed: -6 00:21:22.760 starting I/O failed: -6 00:21:22.760 starting I/O failed: -6 00:21:22.760 starting I/O failed: -6 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 
00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 [2024-11-15 12:43:02.629117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error 
(sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.760 Write completed with error (sct=0, sc=8) 00:21:22.760 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 [2024-11-15 12:43:02.630809] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:22.761 NVMe io qpair process completion error 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 [2024-11-15 12:43:02.632051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O 
failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 [2024-11-15 12:43:02.633160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 Write completed with error (sct=0, sc=8) 00:21:22.761 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error 
(sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 [2024-11-15 12:43:02.634353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed 
with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.762 starting I/O failed: -6 00:21:22.762 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with 
error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 [2024-11-15 12:43:02.636465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:22.763 NVMe io qpair process completion error 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, 
sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 
starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.763 starting I/O failed: -6 00:21:22.763 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write 
completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 
00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 Write completed with error (sct=0, sc=8) 00:21:22.764 
Write completed with error (sct=0, sc=8) 00:21:22.764 starting I/O failed: -6 00:21:22.764 ... (further identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions repeated) ... 00:21:22.764 [2024-11-15 12:43:02.642407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:22.765 ... (repeated write-error completions) ... 00:21:22.765 [2024-11-15 12:43:02.643465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:22.765 ... (repeated write-error completions) ... 00:21:22.765 [2024-11-15 12:43:02.644643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:22.765 ... (repeated write-error completions) ... 00:21:22.766 [2024-11-15 12:43:02.648566]
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:22.766 NVMe io qpair process completion error 00:21:22.766 Initializing NVMe Controllers 00:21:22.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:22.766 Controller IO queue size 128, less than required. 00:21:22.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:22.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:21:22.766 Controller IO queue size 128, less than required. 00:21:22.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:22.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:22.766 Controller IO queue size 128, less than required. 00:21:22.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:22.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:21:22.766 Controller IO queue size 128, less than required. 00:21:22.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:22.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:22.766 Controller IO queue size 128, less than required. 00:21:22.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:22.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:22.766 Controller IO queue size 128, less than required. 00:21:22.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:22.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:22.766 Controller IO queue size 128, less than required. 00:21:22.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:22.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:22.766 Controller IO queue size 128, less than required. 00:21:22.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:22.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:22.766 Controller IO queue size 128, less than required. 00:21:22.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:22.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:21:22.766 Controller IO queue size 128, less than required. 00:21:22.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:22.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:22.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:21:22.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:21:22.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:21:22.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:21:22.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:21:22.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:21:22.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:21:22.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:21:22.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:21:22.766 Initialization complete. Launching workers. 00:21:22.766 ======================================================== 00:21:22.766 Latency(us) 00:21:22.766 Device Information : IOPS MiB/s Average min max 00:21:22.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1883.15 80.92 67992.58 1133.15 116923.90 00:21:22.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1842.10 79.15 69530.57 809.85 152433.56 00:21:22.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1813.38 77.92 70655.39 1002.95 123520.52 00:21:22.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1818.91 78.16 70464.87 778.41 121791.31 00:21:22.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1841.25 79.12 69635.56 965.24 119043.06 00:21:22.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1783.39 76.63 71930.01 975.00 117676.34 00:21:22.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1770.42 76.07 72498.70 878.86 131909.24 00:21:22.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1761.06 75.67 72908.88 1010.49 121362.78 00:21:22.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1803.81 77.51 71196.69 973.37 136876.11 00:21:22.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1809.34 77.75 71005.50 990.42 138566.94 00:21:22.766 ======================================================== 00:21:22.766 Total : 18126.82 778.89 70754.92 778.41 152433.56 00:21:22.766 00:21:22.766 [2024-11-15 12:43:02.653898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1751720 is same with the state(6) to be set 00:21:22.766 [2024-11-15 12:43:02.653990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174fd10 is same with the state(6) to be set 00:21:22.766 [2024-11-15 12:43:02.654050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17502c0 is same with the state(6) to be set 00:21:22.766 [2024-11-15 12:43:02.654108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1750c50 is same with the state(6) to be set 00:21:22.766 [2024-11-15 12:43:02.654187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1751900 is same with the state(6) to be set 00:21:22.766 [2024-11-15 12:43:02.654244] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174f9e0 is same with the state(6) to be set 00:21:22.766 [2024-11-15 12:43:02.654300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174f6b0 is same with the state(6) to be set 00:21:22.766 [2024-11-15 12:43:02.654356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1750920 is same with the state(6) to be set 00:21:22.766 [2024-11-15 12:43:02.654415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1751ae0 is same with the state(6) to be set 00:21:22.766 [2024-11-15 12:43:02.654471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17505f0 is same with the state(6) to be set 00:21:22.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:23.092 12:43:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1071465 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1071465 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1071465 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:24.086 
12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:24.086 rmmod nvme_tcp 00:21:24.086 rmmod nvme_fabrics 00:21:24.086 rmmod nvme_keyring 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1071289 ']' 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1071289 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1071289 ']' 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1071289 00:21:24.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1071289) - No such process 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1071289 is not found' 00:21:24.086 Process with pid 1071289 is not found 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.086 12:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.994 12:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:25.994 00:21:25.994 real 0m9.719s 00:21:25.994 user 0m23.482s 00:21:25.994 sys 0m5.644s 00:21:25.994 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.994 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:25.994 ************************************ 00:21:25.994 END TEST nvmf_shutdown_tc4 00:21:25.994 ************************************ 00:21:25.994 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:25.994 00:21:25.994 real 0m37.598s 00:21:25.994 user 1m41.480s 00:21:25.994 sys 0m12.026s 00:21:25.994 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.994 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:25.994 ************************************ 00:21:25.994 END TEST nvmf_shutdown 00:21:25.994 ************************************ 00:21:25.994 12:43:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:25.994 12:43:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:25.994 12:43:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.994 12:43:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:25.994 ************************************ 00:21:25.994 START TEST nvmf_nsid 00:21:25.994 ************************************ 00:21:25.994 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:26.253 * Looking for test storage... 
00:21:26.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:26.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.253 --rc genhtml_branch_coverage=1 00:21:26.253 --rc genhtml_function_coverage=1 00:21:26.253 --rc genhtml_legend=1 00:21:26.253 --rc geninfo_all_blocks=1 00:21:26.253 --rc geninfo_unexecuted_blocks=1 00:21:26.253 00:21:26.253 ' 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:26.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.253 --rc genhtml_branch_coverage=1 00:21:26.253 --rc genhtml_function_coverage=1 00:21:26.253 --rc genhtml_legend=1 00:21:26.253 --rc geninfo_all_blocks=1 00:21:26.253 --rc geninfo_unexecuted_blocks=1 00:21:26.253 00:21:26.253 ' 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:26.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.253 --rc genhtml_branch_coverage=1 00:21:26.253 --rc genhtml_function_coverage=1 00:21:26.253 --rc genhtml_legend=1 00:21:26.253 --rc geninfo_all_blocks=1 00:21:26.253 --rc geninfo_unexecuted_blocks=1 00:21:26.253 00:21:26.253 ' 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:26.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.253 --rc genhtml_branch_coverage=1 00:21:26.253 --rc genhtml_function_coverage=1 00:21:26.253 --rc genhtml_legend=1 00:21:26.253 --rc geninfo_all_blocks=1 00:21:26.253 --rc geninfo_unexecuted_blocks=1 00:21:26.253 00:21:26.253 ' 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:26.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:26.253 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:28.787 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.787 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:28.788 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:28.788 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:28.788 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.788 12:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:28.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:21:28.788 00:21:28.788 --- 10.0.0.2 ping statistics --- 00:21:28.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.788 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:28.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:21:28.788 00:21:28.788 --- 10.0.0.1 ping statistics --- 00:21:28.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.788 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1074213 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1074213 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1074213 ']' 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.788 12:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.788 [2024-11-15 12:43:08.811930] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:21:28.788 [2024-11-15 12:43:08.812040] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.788 [2024-11-15 12:43:08.883016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.788 [2024-11-15 12:43:08.935641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.788 [2024-11-15 12:43:08.935706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.788 [2024-11-15 12:43:08.935743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.788 [2024-11-15 12:43:08.935755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.788 [2024-11-15 12:43:08.935765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.788 [2024-11-15 12:43:08.936348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.788 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.788 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1074239 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=eb4d3744-e752-4913-bffd-7bea9e275ffe 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=83f4ddb8-867d-4934-9f1c-66ed1b110f27 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=ab4420a7-89ae-43be-9dd1-c50e2f8ae182 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.789 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.789 null0 00:21:28.789 null1 00:21:28.789 null2 00:21:28.789 [2024-11-15 12:43:09.106245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.789 [2024-11-15 12:43:09.119437] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:21:28.789 [2024-11-15 12:43:09.119508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1074239 ] 00:21:29.047 [2024-11-15 12:43:09.130463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.047 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.047 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1074239 /var/tmp/tgt2.sock 00:21:29.047 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1074239 ']' 00:21:29.047 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:29.047 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.047 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:29.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
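The trace that follows boils down to one invariant: for every namespace created with an explicit UUID, the NGUID reported by the attached controller must equal that UUID with the dashes stripped (compared case-insensitively). A minimal stand-alone sketch of that check, assuming nvme-cli and jq are installed and using one of the UUIDs generated in this run as an example:

#!/usr/bin/env bash
# Sketch only -- not part of nsid.sh. Verifies that a namespace's reported
# NGUID is its UUID with the dashes removed.
uuid="eb4d3744-e752-4913-bffd-7bea9e275ffe"   # ns1uuid from this run
dev="/dev/nvme0n1"                            # block device created by 'nvme connect'
expected=$(tr -d '-' <<< "$uuid")
reported=$(nvme id-ns "$dev" -o json | jq -r .nguid)
if [[ "${reported^^}" == "${expected^^}" ]]; then
    echo "NGUID matches UUID for $dev"
else
    echo "mismatch: got $reported, expected $expected" >&2
    exit 1
fi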
00:21:29.047 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.047 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:29.047 [2024-11-15 12:43:09.189797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.047 [2024-11-15 12:43:09.248206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.305 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:29.305 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:29.305 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:29.563 [2024-11-15 12:43:09.903980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.821 [2024-11-15 12:43:09.920120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:29.821 nvme0n1 nvme0n2 00:21:29.821 nvme1n1 00:21:29.821 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:29.821 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:29.821 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.388 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:30.388 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:30.388 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:30.388 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:30.388 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:30.388 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:30.388 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:30.388 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:30.388 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:30.388 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:30.388 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:30.388 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:30.388 12:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:31.321 12:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid eb4d3744-e752-4913-bffd-7bea9e275ffe 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=eb4d3744e7524913bffd7bea9e275ffe 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EB4D3744E7524913BFFD7BEA9E275FFE 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ EB4D3744E7524913BFFD7BEA9E275FFE == \E\B\4\D\3\7\4\4\E\7\5\2\4\9\1\3\B\F\F\D\7\B\E\A\9\E\2\7\5\F\F\E ]] 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 83f4ddb8-867d-4934-9f1c-66ed1b110f27 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=83f4ddb8867d49349f1c66ed1b110f27 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 83F4DDB8867D49349F1C66ED1B110F27 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 83F4DDB8867D49349F1C66ED1B110F27 == \8\3\F\4\D\D\B\8\8\6\7\D\4\9\3\4\9\F\1\C\6\6\E\D\1\B\1\1\0\F\2\7 ]] 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:31.321 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:31.580 12:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid ab4420a7-89ae-43be-9dd1-c50e2f8ae182 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ab4420a789ae43be9dd1c50e2f8ae182 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AB4420A789AE43BE9DD1C50E2F8AE182 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ AB4420A789AE43BE9DD1C50E2F8AE182 == \A\B\4\4\2\0\A\7\8\9\A\E\4\3\B\E\9\D\D\1\C\5\0\E\2\F\8\A\E\1\8\2 ]] 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1074239 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1074239 ']' 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1074239 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1074239 00:21:31.580 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:31.838 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:31.838 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1074239' 00:21:31.838 killing process with pid 1074239 00:21:31.838 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1074239 00:21:31.838 12:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1074239 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:32.096 rmmod nvme_tcp 00:21:32.096 rmmod nvme_fabrics 00:21:32.096 rmmod nvme_keyring 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1074213 ']' 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1074213 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1074213 ']' 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1074213 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:32.096 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1074213 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1074213' 00:21:32.354 killing process with pid 1074213 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1074213 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1074213 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.354 12:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.893 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.893 00:21:34.893 real 0m8.416s 00:21:34.893 user 0m8.203s 
00:21:34.893 sys 0m2.748s 00:21:34.893 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.893 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:34.893 ************************************ 00:21:34.893 END TEST nvmf_nsid 00:21:34.893 ************************************ 00:21:34.893 12:43:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:34.893 00:21:34.893 real 11m43.255s 00:21:34.893 user 27m41.733s 00:21:34.893 sys 2m46.561s 00:21:34.893 12:43:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.893 12:43:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:34.893 ************************************ 00:21:34.893 END TEST nvmf_target_extra 00:21:34.893 ************************************ 00:21:34.893 12:43:14 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:34.893 12:43:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.893 12:43:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.893 12:43:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:34.893 ************************************ 00:21:34.893 START TEST nvmf_host 00:21:34.893 ************************************ 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:34.893 * Looking for test storage... 00:21:34.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.893 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:34.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.894 --rc genhtml_branch_coverage=1 00:21:34.894 --rc genhtml_function_coverage=1 00:21:34.894 --rc genhtml_legend=1 00:21:34.894 --rc geninfo_all_blocks=1 00:21:34.894 --rc geninfo_unexecuted_blocks=1 00:21:34.894 00:21:34.894 ' 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:34.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.894 --rc genhtml_branch_coverage=1 00:21:34.894 --rc genhtml_function_coverage=1 00:21:34.894 --rc genhtml_legend=1 00:21:34.894 --rc geninfo_all_blocks=1 00:21:34.894 --rc geninfo_unexecuted_blocks=1 00:21:34.894 00:21:34.894 ' 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:34.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.894 --rc genhtml_branch_coverage=1 00:21:34.894 --rc genhtml_function_coverage=1 00:21:34.894 --rc genhtml_legend=1 00:21:34.894 --rc geninfo_all_blocks=1 00:21:34.894 --rc geninfo_unexecuted_blocks=1 00:21:34.894 00:21:34.894 ' 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:34.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.894 --rc genhtml_branch_coverage=1 00:21:34.894 --rc genhtml_function_coverage=1 00:21:34.894 --rc genhtml_legend=1 00:21:34.894 --rc geninfo_all_blocks=1 00:21:34.894 --rc geninfo_unexecuted_blocks=1 00:21:34.894 00:21:34.894 ' 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
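The version gate traced above (lt 1.15 2 via cmp_versions) decides whether the installed lcov is older than 2.x and therefore still needs the legacy --rc lcov_branch_coverage / --rc lcov_function_coverage options. A distilled, hypothetical re-implementation of that comparison, shown only to make the control flow explicit:

# Sketch only: split each version string on '.', compare fields numerically,
# return success (0) when the first version is strictly lower.
lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}
lt 1.15 2 && echo "lcov predates 2.x: keep the legacy --rc lcov_* options"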
00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.894 ************************************ 00:21:34.894 START TEST nvmf_multicontroller 00:21:34.894 ************************************ 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:34.894 * Looking for test storage... 
00:21:34.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:21:34.894 12:43:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:34.894 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:34.894 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.894 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.894 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.894 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.894 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.894 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.894 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.894 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.894 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.894 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:34.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.895 --rc genhtml_branch_coverage=1 00:21:34.895 --rc genhtml_function_coverage=1 00:21:34.895 --rc genhtml_legend=1 00:21:34.895 --rc geninfo_all_blocks=1 00:21:34.895 --rc geninfo_unexecuted_blocks=1 00:21:34.895 00:21:34.895 ' 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:34.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.895 --rc genhtml_branch_coverage=1 00:21:34.895 --rc genhtml_function_coverage=1 00:21:34.895 --rc genhtml_legend=1 00:21:34.895 --rc geninfo_all_blocks=1 00:21:34.895 --rc geninfo_unexecuted_blocks=1 00:21:34.895 00:21:34.895 ' 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:34.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.895 --rc genhtml_branch_coverage=1 00:21:34.895 --rc genhtml_function_coverage=1 00:21:34.895 --rc genhtml_legend=1 00:21:34.895 --rc geninfo_all_blocks=1 00:21:34.895 --rc geninfo_unexecuted_blocks=1 00:21:34.895 00:21:34.895 ' 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:34.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.895 --rc genhtml_branch_coverage=1 00:21:34.895 --rc genhtml_function_coverage=1 00:21:34.895 --rc genhtml_legend=1 00:21:34.895 --rc geninfo_all_blocks=1 00:21:34.895 --rc geninfo_unexecuted_blocks=1 00:21:34.895 00:21:34.895 ' 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:34.895 12:43:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:34.895 12:43:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.895 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.896 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.896 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:34.896 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:34.896 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.896 12:43:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:37.430 
12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:37.430 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:37.430 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.430 12:43:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.430 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:37.431 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:37.431 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
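What the helpers above are doing, condensed: walk the PCI bus, keep the Intel E810 functions (vendor 0x8086, device 0x159b on this rig), and record the kernel netdev exposed under each function's sysfs node, which is how cvl_0_0 and cvl_0_1 were found. A hypothetical self-contained equivalent (the real common.sh builds a pci_bus_cache first; this reads sysfs directly):

# Sketch only: enumerate E810 ports and the netdevs bound to them.
for pci in /sys/bus/pci/devices/*; do
    [[ "$(cat "$pci/vendor")" == "0x8086" ]] || continue
    [[ "$(cat "$pci/device")" == "0x159b" ]] || continue
    for net in "$pci"/net/*; do
        [[ -e "$net" ]] || continue
        echo "Found net device under ${pci##*/}: ${net##*/}"
    done
done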
00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:37.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:21:37.431 00:21:37.431 --- 10.0.0.2 ping statistics --- 00:21:37.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.431 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:37.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:21:37.431 00:21:37.431 --- 10.0.0.1 ping statistics --- 00:21:37.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.431 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1076793 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1076793 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1076793 ']' 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.431 [2024-11-15 12:43:17.475268] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
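For reference, the interface plumbing captured a few entries above reduces to the following sequence (run as root; cvl_0_0 and cvl_0_1 are the E810 port names on this particular host, and the nvmf_tgt launched afterwards runs inside the namespace, so the initiator in the root namespace reaches it at 10.0.0.2):

# Condensed from the trace above; sketch only.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the namespaced target
ping -c 1 10.0.0.2                                     # root netns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespaced target -> root netns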
00:21:37.431 [2024-11-15 12:43:17.475356] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.431 [2024-11-15 12:43:17.547498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:37.431 [2024-11-15 12:43:17.608130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.431 [2024-11-15 12:43:17.608184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.431 [2024-11-15 12:43:17.608213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.431 [2024-11-15 12:43:17.608224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.431 [2024-11-15 12:43:17.608234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:37.431 [2024-11-15 12:43:17.609863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.431 [2024-11-15 12:43:17.609888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.431 [2024-11-15 12:43:17.609892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:37.431 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.432 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.432 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:37.432 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.432 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.432 [2024-11-15 12:43:17.761982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.432 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.432 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:37.432 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.432 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.692 Malloc0 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.692 [2024-11-15 12:43:17.822052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.692 [2024-11-15 12:43:17.829936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.692 Malloc1 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.692 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1076825 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1076825 /var/tmp/bdevperf.sock 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1076825 ']' 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:37.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
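The next trace entries attach the first controller path through bdevperf's private RPC socket once it is listening. A minimal hand-run equivalent, kept as a sketch (the rpc.py location is assumed to be the in-tree scripts/rpc.py of this SPDK checkout; socket, addresses and NQN are the ones this run uses):

  # sketch only: in-tree scripts/rpc.py assumed, run from the SPDK checkout
  # attach the first path: controller NVMe0, target 10.0.0.2:4420, initiator address 10.0.0.1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  # confirm the controller was registered (the test greps this output for "NVMe")
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers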
00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.693 12:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.951 NVMe0n1 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.951 1 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.951 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.210 request: 00:21:38.210 { 00:21:38.210 "name": "NVMe0", 00:21:38.210 "trtype": "tcp", 00:21:38.210 "traddr": "10.0.0.2", 00:21:38.210 "adrfam": "ipv4", 00:21:38.210 "trsvcid": "4420", 00:21:38.210 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:38.210 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:38.210 "hostaddr": "10.0.0.1", 00:21:38.210 "prchk_reftag": false, 00:21:38.210 "prchk_guard": false, 00:21:38.210 "hdgst": false, 00:21:38.210 "ddgst": false, 00:21:38.210 "allow_unrecognized_csi": false, 00:21:38.210 "method": "bdev_nvme_attach_controller", 00:21:38.210 "req_id": 1 00:21:38.210 } 00:21:38.210 Got JSON-RPC error response 00:21:38.210 response: 00:21:38.210 { 00:21:38.210 "code": -114, 00:21:38.210 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:38.210 } 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.210 request: 00:21:38.210 { 00:21:38.210 "name": "NVMe0", 00:21:38.210 "trtype": "tcp", 00:21:38.210 "traddr": "10.0.0.2", 00:21:38.210 "adrfam": "ipv4", 00:21:38.210 "trsvcid": "4420", 00:21:38.210 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:38.210 "hostaddr": "10.0.0.1", 00:21:38.210 "prchk_reftag": false, 00:21:38.210 "prchk_guard": false, 00:21:38.210 "hdgst": false, 00:21:38.210 "ddgst": false, 00:21:38.210 "allow_unrecognized_csi": false, 00:21:38.210 "method": "bdev_nvme_attach_controller", 00:21:38.210 "req_id": 1 00:21:38.210 } 00:21:38.210 Got JSON-RPC error response 00:21:38.210 response: 00:21:38.210 { 00:21:38.210 "code": -114, 00:21:38.210 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:38.210 } 00:21:38.210 12:43:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.210 request: 00:21:38.210 { 00:21:38.210 "name": "NVMe0", 00:21:38.210 "trtype": "tcp", 00:21:38.210 "traddr": "10.0.0.2", 00:21:38.210 "adrfam": "ipv4", 00:21:38.210 "trsvcid": "4420", 00:21:38.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.210 "hostaddr": "10.0.0.1", 00:21:38.210 "prchk_reftag": false, 00:21:38.210 "prchk_guard": false, 00:21:38.210 "hdgst": false, 00:21:38.210 "ddgst": false, 00:21:38.210 "multipath": "disable", 00:21:38.210 "allow_unrecognized_csi": false, 00:21:38.210 "method": "bdev_nvme_attach_controller", 00:21:38.210 "req_id": 1 00:21:38.210 } 00:21:38.210 Got JSON-RPC error response 00:21:38.210 response: 00:21:38.210 { 00:21:38.210 "code": -114, 00:21:38.210 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:38.210 } 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.210 12:43:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.210 request: 00:21:38.210 { 00:21:38.210 "name": "NVMe0", 00:21:38.210 "trtype": "tcp", 00:21:38.210 "traddr": "10.0.0.2", 00:21:38.210 "adrfam": "ipv4", 00:21:38.210 "trsvcid": "4420", 00:21:38.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.210 "hostaddr": "10.0.0.1", 00:21:38.210 "prchk_reftag": false, 00:21:38.210 "prchk_guard": false, 00:21:38.210 "hdgst": false, 00:21:38.210 "ddgst": false, 00:21:38.210 "multipath": "failover", 00:21:38.210 "allow_unrecognized_csi": false, 00:21:38.210 "method": "bdev_nvme_attach_controller", 00:21:38.210 "req_id": 1 00:21:38.210 } 00:21:38.210 Got JSON-RPC error response 00:21:38.210 response: 00:21:38.210 { 00:21:38.210 "code": -114, 00:21:38.210 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:38.210 } 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:38.210 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:38.211 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.211 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.211 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.211 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.211 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.211 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.211 NVMe0n1 00:21:38.211 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
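Each rejected call above reuses the name NVMe0 against the 10.0.0.2:4420 portal the existing controller already claims, varying only the host NQN, the subsystem NQN, or the multipath mode, so bdev_nvme answers with -114; the call that succeeds just above instead targets the 4421 listener, a genuinely new path for the same controller name and subsystem. A hand-run equivalent of that accepted attach plus the controller-count check the test performs later, kept as a sketch (in-tree scripts/rpc.py assumed):

  # sketch only: in-tree scripts/rpc.py assumed
  # add the 4421 portal as an additional path for the existing NVMe0 controller
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # list registered controllers; the test later expects "grep -c NVMe" to report 2 (NVMe0 and NVMe1)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers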
00:21:38.211 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.211 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.211 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.211 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.211 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:38.211 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.211 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.469 00:21:38.469 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.469 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:38.469 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:38.469 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.469 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.469 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.469 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:38.469 12:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:39.841 { 00:21:39.841 "results": [ 00:21:39.841 { 00:21:39.841 "job": "NVMe0n1", 00:21:39.841 "core_mask": "0x1", 00:21:39.841 "workload": "write", 00:21:39.841 "status": "finished", 00:21:39.841 "queue_depth": 128, 00:21:39.841 "io_size": 4096, 00:21:39.841 "runtime": 1.009723, 00:21:39.841 "iops": 18617.97740568453, 00:21:39.841 "mibps": 72.7264742409552, 00:21:39.841 "io_failed": 0, 00:21:39.841 "io_timeout": 0, 00:21:39.841 "avg_latency_us": 6865.301199551592, 00:21:39.841 "min_latency_us": 3034.074074074074, 00:21:39.841 "max_latency_us": 12233.386666666667 00:21:39.841 } 00:21:39.841 ], 00:21:39.841 "core_count": 1 00:21:39.841 } 00:21:39.841 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:39.841 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.841 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.841 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.841 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:39.841 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1076825 00:21:39.841 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1076825 ']' 00:21:39.841 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1076825 00:21:39.841 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:39.841 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.841 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1076825 00:21:39.842 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:39.842 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:39.842 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1076825' 00:21:39.842 killing process with pid 1076825 00:21:39.842 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1076825 00:21:39.842 12:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1076825 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:39.842 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:39.842 [2024-11-15 12:43:17.935141] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:21:39.842 [2024-11-15 12:43:17.935232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076825 ] 00:21:39.842 [2024-11-15 12:43:18.003093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.842 [2024-11-15 12:43:18.062447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.842 [2024-11-15 12:43:18.721441] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 5d9b2a82-a618-40cf-9009-dc82d1079cf0 already exists 00:21:39.842 [2024-11-15 12:43:18.721480] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:5d9b2a82-a618-40cf-9009-dc82d1079cf0 alias for bdev NVMe1n1 00:21:39.842 [2024-11-15 12:43:18.721511] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:39.842 Running I/O for 1 seconds... 00:21:39.842 18544.00 IOPS, 72.44 MiB/s 00:21:39.842 Latency(us) 00:21:39.842 [2024-11-15T11:43:20.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.842 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:39.842 NVMe0n1 : 1.01 18617.98 72.73 0.00 0.00 6865.30 3034.07 12233.39 00:21:39.842 [2024-11-15T11:43:20.186Z] =================================================================================================================== 00:21:39.842 [2024-11-15T11:43:20.186Z] Total : 18617.98 72.73 0.00 0.00 6865.30 3034.07 12233.39 00:21:39.842 Received shutdown signal, test time was about 1.000000 seconds 00:21:39.842 00:21:39.842 Latency(us) 00:21:39.842 [2024-11-15T11:43:20.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.842 [2024-11-15T11:43:20.186Z] =================================================================================================================== 00:21:39.842 [2024-11-15T11:43:20.186Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.842 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:39.842 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:39.842 rmmod nvme_tcp 00:21:39.842 rmmod nvme_fabrics 00:21:40.100 rmmod nvme_keyring 00:21:40.100 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:40.100 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:40.100 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:40.100 
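The target-side teardown that follows (killprocess, iptr, remove_spdk_ns, address flush) can be approximated by hand. This is a sketch under the assumptions that cvl_0_0_ns_spdk is the only namespace the test created and that the SPDK-tagged iptables rule is the only one to drop:

  kill <nvmfpid>                                         # stop the nvmf_tgt started earlier (1076793 in this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the ACCEPT rule tagged with the SPDK_NVMF comment
  ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of remove_spdk_ns; returns cvl_0_0 to the default namespace
  ip -4 addr flush cvl_0_1                               # clear the initiator-side address, as the final trace entry does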
12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1076793 ']' 00:21:40.100 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1076793 00:21:40.100 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1076793 ']' 00:21:40.100 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1076793 00:21:40.100 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:40.100 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.100 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1076793 00:21:40.100 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:40.100 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:40.100 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1076793' 00:21:40.100 killing process with pid 1076793 00:21:40.100 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1076793 00:21:40.100 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1076793 00:21:40.359 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:40.359 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:40.359 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:40.359 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:40.359 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:40.359 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:40.359 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:40.359 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:40.359 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:40.359 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.359 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.359 12:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.264 12:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:42.264 00:21:42.264 real 0m7.632s 00:21:42.264 user 0m11.884s 00:21:42.264 sys 0m2.373s 00:21:42.264 12:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:42.264 12:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.264 ************************************ 00:21:42.264 END TEST nvmf_multicontroller 00:21:42.264 ************************************ 00:21:42.264 12:43:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:21:42.264 12:43:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:42.264 12:43:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:42.264 12:43:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.524 ************************************ 00:21:42.524 START TEST nvmf_aer 00:21:42.524 ************************************ 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:42.524 * Looking for test storage... 00:21:42.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:42.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.524 --rc genhtml_branch_coverage=1 00:21:42.524 --rc genhtml_function_coverage=1 00:21:42.524 --rc genhtml_legend=1 00:21:42.524 --rc geninfo_all_blocks=1 00:21:42.524 --rc geninfo_unexecuted_blocks=1 00:21:42.524 00:21:42.524 ' 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:42.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.524 --rc genhtml_branch_coverage=1 00:21:42.524 --rc genhtml_function_coverage=1 00:21:42.524 --rc genhtml_legend=1 00:21:42.524 --rc geninfo_all_blocks=1 00:21:42.524 --rc geninfo_unexecuted_blocks=1 00:21:42.524 00:21:42.524 ' 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:42.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.524 --rc genhtml_branch_coverage=1 00:21:42.524 --rc genhtml_function_coverage=1 00:21:42.524 --rc genhtml_legend=1 00:21:42.524 --rc geninfo_all_blocks=1 00:21:42.524 --rc geninfo_unexecuted_blocks=1 00:21:42.524 00:21:42.524 ' 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:42.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.524 --rc genhtml_branch_coverage=1 00:21:42.524 --rc genhtml_function_coverage=1 00:21:42.524 --rc genhtml_legend=1 00:21:42.524 --rc geninfo_all_blocks=1 00:21:42.524 --rc geninfo_unexecuted_blocks=1 00:21:42.524 00:21:42.524 ' 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.524 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:42.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:42.525 12:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.060 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.060 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:45.060 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:45.060 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:45.060 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:45.060 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:45.060 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:45.060 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:45.060 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:45.060 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:45.061 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:45.061 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:45.061 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:45.061 12:43:24 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:45.061 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.061 12:43:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:45.061 
12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:45.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:21:45.061 00:21:45.061 --- 10.0.0.2 ping statistics --- 00:21:45.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.061 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:45.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:21:45.061 00:21:45.061 --- 10.0.0.1 ping statistics --- 00:21:45.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.061 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1079043 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1079043 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1079043 ']' 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.061 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.062 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.062 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.062 [2024-11-15 12:43:25.124473] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
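[annotation, not part of the captured log] The trace above is nvmf_tcp_init followed by nvmfappstart: the harness splits the two E810 ports between a target network namespace and the root (initiator) namespace, opens the NVMe/TCP port in iptables, verifies reachability with ping in both directions, and then launches nvmf_tgt inside that namespace. A minimal standalone sketch of the same plumbing is below; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken from the log, everything else is illustrative.

    # target-side namespace gets one port, the initiator keeps the other
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator IP in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP in the ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP traffic
    ping -c 1 10.0.0.2                                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator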
00:21:45.062 [2024-11-15 12:43:25.124548] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.062 [2024-11-15 12:43:25.200615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:45.062 [2024-11-15 12:43:25.263126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.062 [2024-11-15 12:43:25.263176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.062 [2024-11-15 12:43:25.263204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.062 [2024-11-15 12:43:25.263215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.062 [2024-11-15 12:43:25.263225] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.062 [2024-11-15 12:43:25.264928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.062 [2024-11-15 12:43:25.264989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.062 [2024-11-15 12:43:25.265036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.062 [2024-11-15 12:43:25.265040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.062 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.062 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:45.062 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:45.062 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:45.062 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.062 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.062 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:45.062 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.062 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.062 [2024-11-15 12:43:25.402491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.320 Malloc0 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.320 [2024-11-15 12:43:25.472847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.320 [ 00:21:45.320 { 00:21:45.320 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:45.320 "subtype": "Discovery", 00:21:45.320 "listen_addresses": [], 00:21:45.320 "allow_any_host": true, 00:21:45.320 "hosts": [] 00:21:45.320 }, 00:21:45.320 { 00:21:45.320 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.320 "subtype": "NVMe", 00:21:45.320 "listen_addresses": [ 00:21:45.320 { 00:21:45.320 "trtype": "TCP", 00:21:45.320 "adrfam": "IPv4", 00:21:45.320 "traddr": "10.0.0.2", 00:21:45.320 "trsvcid": "4420" 00:21:45.320 } 00:21:45.320 ], 00:21:45.320 "allow_any_host": true, 00:21:45.320 "hosts": [], 00:21:45.320 "serial_number": "SPDK00000000000001", 00:21:45.320 "model_number": "SPDK bdev Controller", 00:21:45.320 "max_namespaces": 2, 00:21:45.320 "min_cntlid": 1, 00:21:45.320 "max_cntlid": 65519, 00:21:45.320 "namespaces": [ 00:21:45.320 { 00:21:45.320 "nsid": 1, 00:21:45.320 "bdev_name": "Malloc0", 00:21:45.320 "name": "Malloc0", 00:21:45.320 "nguid": "2D4623D8E73F4BC39998B6F5E829E9ED", 00:21:45.320 "uuid": "2d4623d8-e73f-4bc3-9998-b6f5e829e9ed" 00:21:45.320 } 00:21:45.320 ] 00:21:45.320 } 00:21:45.320 ] 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:45.320 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1079188 00:21:45.321 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:45.321 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:45.321 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:45.321 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:45.321 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:45.321 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:45.321 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:45.321 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:45.321 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:45.321 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:45.321 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.579 Malloc1 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.579 [ 00:21:45.579 { 00:21:45.579 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:45.579 "subtype": "Discovery", 00:21:45.579 "listen_addresses": [], 00:21:45.579 "allow_any_host": true, 00:21:45.579 "hosts": [] 00:21:45.579 }, 00:21:45.579 { 00:21:45.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.579 "subtype": "NVMe", 00:21:45.579 "listen_addresses": [ 00:21:45.579 { 00:21:45.579 "trtype": "TCP", 00:21:45.579 "adrfam": "IPv4", 00:21:45.579 "traddr": "10.0.0.2", 00:21:45.579 "trsvcid": "4420" 00:21:45.579 } 00:21:45.579 ], 00:21:45.579 "allow_any_host": true, 00:21:45.579 "hosts": [], 00:21:45.579 "serial_number": "SPDK00000000000001", 00:21:45.579 "model_number": "SPDK bdev Controller", 00:21:45.579 "max_namespaces": 2, 00:21:45.579 "min_cntlid": 1, 00:21:45.579 "max_cntlid": 65519, 00:21:45.579 "namespaces": [ 00:21:45.579 
{ 00:21:45.579 "nsid": 1, 00:21:45.579 "bdev_name": "Malloc0", 00:21:45.579 "name": "Malloc0", 00:21:45.579 "nguid": "2D4623D8E73F4BC39998B6F5E829E9ED", 00:21:45.579 "uuid": "2d4623d8-e73f-4bc3-9998-b6f5e829e9ed" 00:21:45.579 }, 00:21:45.579 { 00:21:45.579 "nsid": 2, 00:21:45.579 "bdev_name": "Malloc1", 00:21:45.579 "name": "Malloc1", 00:21:45.579 "nguid": "445BECCD60FB4B63B28F51214917ADE6", 00:21:45.579 "uuid": "445beccd-60fb-4b63-b28f-51214917ade6" 00:21:45.579 } 00:21:45.579 ] 00:21:45.579 } 00:21:45.579 ] 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1079188 00:21:45.579 Asynchronous Event Request test 00:21:45.579 Attaching to 10.0.0.2 00:21:45.579 Attached to 10.0.0.2 00:21:45.579 Registering asynchronous event callbacks... 00:21:45.579 Starting namespace attribute notice tests for all controllers... 00:21:45.579 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:45.579 aer_cb - Changed Namespace 00:21:45.579 Cleaning up... 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.579 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.837 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.837 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.837 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.837 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:45.837 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.837 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:45.837 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:45.837 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:45.837 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:45.837 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:45.837 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:45.837 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:45.837 12:43:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:45.837 rmmod nvme_tcp 00:21:45.837 rmmod nvme_fabrics 00:21:45.837 rmmod nvme_keyring 00:21:45.837 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:45.837 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:45.837 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:45.837 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1079043 ']' 
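[annotation, not part of the captured log] At this point host/aer.sh has finished: it created the TCP transport, subsystem cnode1 with Malloc0 as nsid 1 and a listener on 10.0.0.2:4420, started the aer tool, then added Malloc1 as nsid 2, which produced the "Changed Namespace" asynchronous event shown above before cleanup. The rpc_cmd calls in the trace map onto direct rpc.py invocations roughly as sketched below; the scripts/rpc.py path is an assumption (rpc_cmd is the harness wrapper), the NQN, bdev names and address come from the log.

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the aer tool is now connected and waiting; adding a second namespace
    # triggers the namespace-attribute-changed AEN seen in the log
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2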
00:21:45.837 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1079043 00:21:45.837 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1079043 ']' 00:21:45.837 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1079043 00:21:45.837 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:45.837 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.837 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1079043 00:21:45.837 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:45.837 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:45.838 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1079043' 00:21:45.838 killing process with pid 1079043 00:21:45.838 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1079043 00:21:45.838 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1079043 00:21:46.096 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:46.096 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:46.096 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:46.096 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:46.096 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:46.096 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:46.096 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:46.096 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:46.096 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:46.096 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.096 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.096 12:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.002 12:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:48.002 00:21:48.002 real 0m5.692s 00:21:48.002 user 0m4.815s 00:21:48.002 sys 0m2.082s 00:21:48.002 12:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.002 12:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.002 ************************************ 00:21:48.002 END TEST nvmf_aer 00:21:48.002 ************************************ 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.261 ************************************ 00:21:48.261 START TEST nvmf_async_init 00:21:48.261 
************************************ 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:48.261 * Looking for test storage... 00:21:48.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:48.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.261 --rc genhtml_branch_coverage=1 00:21:48.261 --rc genhtml_function_coverage=1 00:21:48.261 --rc genhtml_legend=1 00:21:48.261 --rc geninfo_all_blocks=1 00:21:48.261 --rc geninfo_unexecuted_blocks=1 00:21:48.261 00:21:48.261 ' 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:48.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.261 --rc genhtml_branch_coverage=1 00:21:48.261 --rc genhtml_function_coverage=1 00:21:48.261 --rc genhtml_legend=1 00:21:48.261 --rc geninfo_all_blocks=1 00:21:48.261 --rc geninfo_unexecuted_blocks=1 00:21:48.261 00:21:48.261 ' 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:48.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.261 --rc genhtml_branch_coverage=1 00:21:48.261 --rc genhtml_function_coverage=1 00:21:48.261 --rc genhtml_legend=1 00:21:48.261 --rc geninfo_all_blocks=1 00:21:48.261 --rc geninfo_unexecuted_blocks=1 00:21:48.261 00:21:48.261 ' 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:48.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.261 --rc genhtml_branch_coverage=1 00:21:48.261 --rc genhtml_function_coverage=1 00:21:48.261 --rc genhtml_legend=1 00:21:48.261 --rc geninfo_all_blocks=1 00:21:48.261 --rc geninfo_unexecuted_blocks=1 00:21:48.261 00:21:48.261 ' 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.261 12:43:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.261 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:48.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:48.262 12:43:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a4c6163d56634217a86a88870122564c 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:48.262 12:43:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.794 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.794 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:50.794 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:50.794 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:50.794 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:50.795 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:50.795 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:50.795 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:50.795 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.795 12:43:30 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:50.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:50.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:21:50.795 00:21:50.795 --- 10.0.0.2 ping statistics --- 00:21:50.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.795 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:50.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:50.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:21:50.795 00:21:50.795 --- 10.0.0.1 ping statistics --- 00:21:50.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.795 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:50.795 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:50.796 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.796 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.796 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1081130 00:21:50.796 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:50.796 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1081130 00:21:50.796 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1081130 ']' 00:21:50.796 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.796 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.796 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.796 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.796 12:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.796 [2024-11-15 12:43:30.816790] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
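[annotation, not part of the captured log] The async_init test has just re-run the same namespace setup and is now starting its target with nvmfappstart -m 0x1, i.e. a single-core nvmf_tgt inside the target namespace, then waiting for the RPC socket. A simplified sketch of that start-and-wait step is below; the binary path, -i/-e flags and core mask come from the log, while the polling loop and /var/tmp/spdk.sock path are assumptions standing in for the harness's waitforlisten helper.

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll the RPC socket until the target answers, then proceed with the test
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done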
00:21:50.796 [2024-11-15 12:43:30.816873] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.796 [2024-11-15 12:43:30.885381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.796 [2024-11-15 12:43:30.943521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.796 [2024-11-15 12:43:30.943592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.796 [2024-11-15 12:43:30.943606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.796 [2024-11-15 12:43:30.943616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.796 [2024-11-15 12:43:30.943626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.796 [2024-11-15 12:43:30.944293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.796 [2024-11-15 12:43:31.087709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.796 null0 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a4c6163d56634217a86a88870122564c 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.796 [2024-11-15 12:43:31.127987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.796 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.054 nvme0n1 00:21:51.054 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.054 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:51.054 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.054 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.054 [ 00:21:51.054 { 00:21:51.054 "name": "nvme0n1", 00:21:51.054 "aliases": [ 00:21:51.054 "a4c6163d-5663-4217-a86a-88870122564c" 00:21:51.054 ], 00:21:51.054 "product_name": "NVMe disk", 00:21:51.054 "block_size": 512, 00:21:51.054 "num_blocks": 2097152, 00:21:51.054 "uuid": "a4c6163d-5663-4217-a86a-88870122564c", 00:21:51.054 "numa_id": 0, 00:21:51.054 "assigned_rate_limits": { 00:21:51.054 "rw_ios_per_sec": 0, 00:21:51.054 "rw_mbytes_per_sec": 0, 00:21:51.054 "r_mbytes_per_sec": 0, 00:21:51.054 "w_mbytes_per_sec": 0 00:21:51.054 }, 00:21:51.054 "claimed": false, 00:21:51.054 "zoned": false, 00:21:51.054 "supported_io_types": { 00:21:51.054 "read": true, 00:21:51.054 "write": true, 00:21:51.054 "unmap": false, 00:21:51.054 "flush": true, 00:21:51.054 "reset": true, 00:21:51.054 "nvme_admin": true, 00:21:51.054 "nvme_io": true, 00:21:51.054 "nvme_io_md": false, 00:21:51.054 "write_zeroes": true, 00:21:51.054 "zcopy": false, 00:21:51.054 "get_zone_info": false, 00:21:51.054 "zone_management": false, 00:21:51.054 "zone_append": false, 00:21:51.054 "compare": true, 00:21:51.054 "compare_and_write": true, 00:21:51.054 "abort": true, 00:21:51.054 "seek_hole": false, 00:21:51.054 "seek_data": false, 00:21:51.054 "copy": true, 00:21:51.054 "nvme_iov_md": false 00:21:51.054 }, 00:21:51.054 
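The trace above is the async_init bring-up: the target gets a TCP transport, a null bdev, subsystem cnode0 with one namespace and a listener on 4420, and the host side then attaches it as bdev nvme0n1. A minimal sketch of the same sequence, assuming the rpc_cmd test helper wraps the standalone scripts/rpc.py client (the NQN, namespace GUID, address and port are taken from the trace; the rpc.py path is an assumption):

  # Target side: TCP transport with the same options as the trace, a 1024 x 512B null bdev, subsystem cnode0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py bdev_null_create null0 1024 512
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a4c6163d56634217a86a88870122564c
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Host side: attach the remote namespace, which shows up locally as nvme0n1
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0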
"memory_domains": [ 00:21:51.054 { 00:21:51.054 "dma_device_id": "system", 00:21:51.054 "dma_device_type": 1 00:21:51.054 } 00:21:51.054 ], 00:21:51.054 "driver_specific": { 00:21:51.054 "nvme": [ 00:21:51.054 { 00:21:51.054 "trid": { 00:21:51.054 "trtype": "TCP", 00:21:51.054 "adrfam": "IPv4", 00:21:51.054 "traddr": "10.0.0.2", 00:21:51.054 "trsvcid": "4420", 00:21:51.054 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:51.054 }, 00:21:51.054 "ctrlr_data": { 00:21:51.054 "cntlid": 1, 00:21:51.054 "vendor_id": "0x8086", 00:21:51.054 "model_number": "SPDK bdev Controller", 00:21:51.054 "serial_number": "00000000000000000000", 00:21:51.054 "firmware_revision": "25.01", 00:21:51.054 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:51.054 "oacs": { 00:21:51.054 "security": 0, 00:21:51.054 "format": 0, 00:21:51.054 "firmware": 0, 00:21:51.054 "ns_manage": 0 00:21:51.054 }, 00:21:51.054 "multi_ctrlr": true, 00:21:51.054 "ana_reporting": false 00:21:51.054 }, 00:21:51.054 "vs": { 00:21:51.054 "nvme_version": "1.3" 00:21:51.054 }, 00:21:51.054 "ns_data": { 00:21:51.054 "id": 1, 00:21:51.054 "can_share": true 00:21:51.054 } 00:21:51.054 } 00:21:51.054 ], 00:21:51.054 "mp_policy": "active_passive" 00:21:51.054 } 00:21:51.054 } 00:21:51.054 ] 00:21:51.055 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.055 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:51.055 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.055 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.055 [2024-11-15 12:43:31.376431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:51.055 [2024-11-15 12:43:31.376517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d26b20 (9): Bad file descriptor 00:21:51.367 [2024-11-15 12:43:31.508855] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:21:51.367 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.367 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:51.367 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.367 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.367 [ 00:21:51.367 { 00:21:51.367 "name": "nvme0n1", 00:21:51.367 "aliases": [ 00:21:51.367 "a4c6163d-5663-4217-a86a-88870122564c" 00:21:51.367 ], 00:21:51.367 "product_name": "NVMe disk", 00:21:51.367 "block_size": 512, 00:21:51.367 "num_blocks": 2097152, 00:21:51.367 "uuid": "a4c6163d-5663-4217-a86a-88870122564c", 00:21:51.367 "numa_id": 0, 00:21:51.367 "assigned_rate_limits": { 00:21:51.367 "rw_ios_per_sec": 0, 00:21:51.367 "rw_mbytes_per_sec": 0, 00:21:51.367 "r_mbytes_per_sec": 0, 00:21:51.367 "w_mbytes_per_sec": 0 00:21:51.367 }, 00:21:51.367 "claimed": false, 00:21:51.367 "zoned": false, 00:21:51.367 "supported_io_types": { 00:21:51.367 "read": true, 00:21:51.367 "write": true, 00:21:51.367 "unmap": false, 00:21:51.367 "flush": true, 00:21:51.367 "reset": true, 00:21:51.367 "nvme_admin": true, 00:21:51.367 "nvme_io": true, 00:21:51.367 "nvme_io_md": false, 00:21:51.367 "write_zeroes": true, 00:21:51.367 "zcopy": false, 00:21:51.367 "get_zone_info": false, 00:21:51.367 "zone_management": false, 00:21:51.367 "zone_append": false, 00:21:51.367 "compare": true, 00:21:51.367 "compare_and_write": true, 00:21:51.367 "abort": true, 00:21:51.367 "seek_hole": false, 00:21:51.367 "seek_data": false, 00:21:51.367 "copy": true, 00:21:51.367 "nvme_iov_md": false 00:21:51.367 }, 00:21:51.367 "memory_domains": [ 00:21:51.367 { 00:21:51.367 "dma_device_id": "system", 00:21:51.367 "dma_device_type": 1 00:21:51.367 } 00:21:51.367 ], 00:21:51.367 "driver_specific": { 00:21:51.367 "nvme": [ 00:21:51.367 { 00:21:51.367 "trid": { 00:21:51.367 "trtype": "TCP", 00:21:51.367 "adrfam": "IPv4", 00:21:51.367 "traddr": "10.0.0.2", 00:21:51.367 "trsvcid": "4420", 00:21:51.367 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:51.367 }, 00:21:51.367 "ctrlr_data": { 00:21:51.367 "cntlid": 2, 00:21:51.367 "vendor_id": "0x8086", 00:21:51.367 "model_number": "SPDK bdev Controller", 00:21:51.367 "serial_number": "00000000000000000000", 00:21:51.367 "firmware_revision": "25.01", 00:21:51.367 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:51.367 "oacs": { 00:21:51.367 "security": 0, 00:21:51.367 "format": 0, 00:21:51.367 "firmware": 0, 00:21:51.367 "ns_manage": 0 00:21:51.367 }, 00:21:51.367 "multi_ctrlr": true, 00:21:51.367 "ana_reporting": false 00:21:51.367 }, 00:21:51.367 "vs": { 00:21:51.367 "nvme_version": "1.3" 00:21:51.367 }, 00:21:51.367 "ns_data": { 00:21:51.367 "id": 1, 00:21:51.367 "can_share": true 00:21:51.367 } 00:21:51.367 } 00:21:51.367 ], 00:21:51.367 "mp_policy": "active_passive" 00:21:51.367 } 00:21:51.367 } 00:21:51.367 ] 00:21:51.367 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.367 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.367 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.367 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.367 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:51.367 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:51.367 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.c9B9SGJNkd 00:21:51.367 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.c9B9SGJNkd 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.c9B9SGJNkd 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.368 [2024-11-15 12:43:31.565085] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:51.368 [2024-11-15 12:43:31.565215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.368 [2024-11-15 12:43:31.581108] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.368 nvme0n1 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.368 [ 00:21:51.368 { 00:21:51.368 "name": "nvme0n1", 00:21:51.368 "aliases": [ 00:21:51.368 "a4c6163d-5663-4217-a86a-88870122564c" 00:21:51.368 ], 00:21:51.368 "product_name": "NVMe disk", 00:21:51.368 "block_size": 512, 00:21:51.368 "num_blocks": 2097152, 00:21:51.368 "uuid": "a4c6163d-5663-4217-a86a-88870122564c", 00:21:51.368 "numa_id": 0, 00:21:51.368 "assigned_rate_limits": { 00:21:51.368 "rw_ios_per_sec": 0, 00:21:51.368 "rw_mbytes_per_sec": 0, 00:21:51.368 "r_mbytes_per_sec": 0, 00:21:51.368 "w_mbytes_per_sec": 0 00:21:51.368 }, 00:21:51.368 "claimed": false, 00:21:51.368 "zoned": false, 00:21:51.368 "supported_io_types": { 00:21:51.368 "read": true, 00:21:51.368 "write": true, 00:21:51.368 "unmap": false, 00:21:51.368 "flush": true, 00:21:51.368 "reset": true, 00:21:51.368 "nvme_admin": true, 00:21:51.368 "nvme_io": true, 00:21:51.368 "nvme_io_md": false, 00:21:51.368 "write_zeroes": true, 00:21:51.368 "zcopy": false, 00:21:51.368 "get_zone_info": false, 00:21:51.368 "zone_management": false, 00:21:51.368 "zone_append": false, 00:21:51.368 "compare": true, 00:21:51.368 "compare_and_write": true, 00:21:51.368 "abort": true, 00:21:51.368 "seek_hole": false, 00:21:51.368 "seek_data": false, 00:21:51.368 "copy": true, 00:21:51.368 "nvme_iov_md": false 00:21:51.368 }, 00:21:51.368 "memory_domains": [ 00:21:51.368 { 00:21:51.368 "dma_device_id": "system", 00:21:51.368 "dma_device_type": 1 00:21:51.368 } 00:21:51.368 ], 00:21:51.368 "driver_specific": { 00:21:51.368 "nvme": [ 00:21:51.368 { 00:21:51.368 "trid": { 00:21:51.368 "trtype": "TCP", 00:21:51.368 "adrfam": "IPv4", 00:21:51.368 "traddr": "10.0.0.2", 00:21:51.368 "trsvcid": "4421", 00:21:51.368 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:51.368 }, 00:21:51.368 "ctrlr_data": { 00:21:51.368 "cntlid": 3, 00:21:51.368 "vendor_id": "0x8086", 00:21:51.368 "model_number": "SPDK bdev Controller", 00:21:51.368 "serial_number": "00000000000000000000", 00:21:51.368 "firmware_revision": "25.01", 00:21:51.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:51.368 "oacs": { 00:21:51.368 "security": 0, 00:21:51.368 "format": 0, 00:21:51.368 "firmware": 0, 00:21:51.368 "ns_manage": 0 00:21:51.368 }, 00:21:51.368 "multi_ctrlr": true, 00:21:51.368 "ana_reporting": false 00:21:51.368 }, 00:21:51.368 "vs": { 00:21:51.368 "nvme_version": "1.3" 00:21:51.368 }, 00:21:51.368 "ns_data": { 00:21:51.368 "id": 1, 00:21:51.368 "can_share": true 00:21:51.368 } 00:21:51.368 } 00:21:51.368 ], 00:21:51.368 "mp_policy": "active_passive" 00:21:51.368 } 00:21:51.368 } 00:21:51.368 ] 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.c9B9SGJNkd 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
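The second attach in the trace went over a TLS-protected listener on port 4421. A minimal sketch of that PSK flow, assuming scripts/rpc.py and reusing the interchange-format key the trace writes to a mktemp file (the /tmp/psk.key name below is illustrative):

  # Register the PSK with the keyring and require explicit host authorization
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/psk.key
  chmod 0600 /tmp/psk.key
  scripts/rpc.py keyring_file_add_key key0 /tmp/psk.key
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0

  # Host side: connect through the secure listener with the same key
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0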
00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.368 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.368 rmmod nvme_tcp 00:21:51.368 rmmod nvme_fabrics 00:21:51.626 rmmod nvme_keyring 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1081130 ']' 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1081130 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1081130 ']' 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1081130 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1081130 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1081130' 00:21:51.626 killing process with pid 1081130 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1081130 00:21:51.626 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1081130 00:21:51.884 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:51.884 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:51.884 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:51.884 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:51.884 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:51.884 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:51.884 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:51.884 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.884 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.884 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
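The lines above and the start of the next block are the usual nvmftestfini teardown. A rough equivalent of what it boils down to, assuming the target PID and the SPDK-tagged iptables rules are the only state left behind, and that _remove_spdk_ns amounts to deleting the test namespace (the last two steps, namespace removal and address flush, land just below in the trace):

  modprobe -v -r nvme-tcp                  # unloads nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"       # 1081130 in this run; wait works because nvmf_tgt is a child of the test shell
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk          # assumed to be what _remove_spdk_ns does here
  ip -4 addr flush cvl_0_1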
00:21:51.884 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.884 12:43:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.788 12:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:53.788 00:21:53.788 real 0m5.664s 00:21:53.788 user 0m2.179s 00:21:53.788 sys 0m1.929s 00:21:53.788 12:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.788 12:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:53.788 ************************************ 00:21:53.788 END TEST nvmf_async_init 00:21:53.788 ************************************ 00:21:53.788 12:43:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:53.788 12:43:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:53.788 12:43:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.788 12:43:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.788 ************************************ 00:21:53.788 START TEST dma 00:21:53.788 ************************************ 00:21:53.788 12:43:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:53.788 * Looking for test storage... 00:21:53.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:53.788 12:43:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:54.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.047 --rc genhtml_branch_coverage=1 00:21:54.047 --rc genhtml_function_coverage=1 00:21:54.047 --rc genhtml_legend=1 00:21:54.047 --rc geninfo_all_blocks=1 00:21:54.047 --rc geninfo_unexecuted_blocks=1 00:21:54.047 00:21:54.047 ' 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:54.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.047 --rc genhtml_branch_coverage=1 00:21:54.047 --rc genhtml_function_coverage=1 00:21:54.047 --rc genhtml_legend=1 00:21:54.047 --rc geninfo_all_blocks=1 00:21:54.047 --rc geninfo_unexecuted_blocks=1 00:21:54.047 00:21:54.047 ' 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:54.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.047 --rc genhtml_branch_coverage=1 00:21:54.047 --rc genhtml_function_coverage=1 00:21:54.047 --rc genhtml_legend=1 00:21:54.047 --rc geninfo_all_blocks=1 00:21:54.047 --rc geninfo_unexecuted_blocks=1 00:21:54.047 00:21:54.047 ' 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:54.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.047 --rc genhtml_branch_coverage=1 00:21:54.047 --rc genhtml_function_coverage=1 00:21:54.047 --rc genhtml_legend=1 00:21:54.047 --rc geninfo_all_blocks=1 00:21:54.047 --rc geninfo_unexecuted_blocks=1 00:21:54.047 00:21:54.047 ' 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.047 
12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.047 12:43:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:54.048 00:21:54.048 real 0m0.147s 00:21:54.048 user 0m0.107s 00:21:54.048 sys 0m0.049s 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:54.048 ************************************ 00:21:54.048 END TEST dma 00:21:54.048 ************************************ 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.048 ************************************ 00:21:54.048 START TEST nvmf_identify 00:21:54.048 
************************************ 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:54.048 * Looking for test storage... 00:21:54.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:21:54.048 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:54.307 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:54.307 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.307 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.307 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.307 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.307 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:54.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.308 --rc genhtml_branch_coverage=1 00:21:54.308 --rc genhtml_function_coverage=1 00:21:54.308 --rc genhtml_legend=1 00:21:54.308 --rc geninfo_all_blocks=1 00:21:54.308 --rc geninfo_unexecuted_blocks=1 00:21:54.308 00:21:54.308 ' 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:54.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.308 --rc genhtml_branch_coverage=1 00:21:54.308 --rc genhtml_function_coverage=1 00:21:54.308 --rc genhtml_legend=1 00:21:54.308 --rc geninfo_all_blocks=1 00:21:54.308 --rc geninfo_unexecuted_blocks=1 00:21:54.308 00:21:54.308 ' 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:54.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.308 --rc genhtml_branch_coverage=1 00:21:54.308 --rc genhtml_function_coverage=1 00:21:54.308 --rc genhtml_legend=1 00:21:54.308 --rc geninfo_all_blocks=1 00:21:54.308 --rc geninfo_unexecuted_blocks=1 00:21:54.308 00:21:54.308 ' 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:54.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.308 --rc genhtml_branch_coverage=1 00:21:54.308 --rc genhtml_function_coverage=1 00:21:54.308 --rc genhtml_legend=1 00:21:54.308 --rc geninfo_all_blocks=1 00:21:54.308 --rc geninfo_unexecuted_blocks=1 00:21:54.308 00:21:54.308 ' 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.308 12:43:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:56.211 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:56.211 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.211 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:56.212 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:56.212 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:56.212 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:56.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:21:56.471 00:21:56.471 --- 10.0.0.2 ping statistics --- 00:21:56.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.471 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:56.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:21:56.471 00:21:56.471 --- 10.0.0.1 ping statistics --- 00:21:56.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.471 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1083289 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1083289 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1083289 ']' 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.471 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:56.471 [2024-11-15 12:43:36.662658] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
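For reference, the nvmf_tcp_init sequence traced above can be replayed by hand. A condensed sketch using the interface names (cvl_0_0, cvl_0_1), addresses, and namespace name from this particular run; on other hosts the E810 netdev names will differ, and the nvmf_tgt path is shortened here to the build tree:

    # move the target-side port into its own network namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator interface, then verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # finally start the target inside the namespace, as host/identify.sh@18 does
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &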
00:21:56.471 [2024-11-15 12:43:36.662767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.471 [2024-11-15 12:43:36.736912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:56.471 [2024-11-15 12:43:36.796140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.471 [2024-11-15 12:43:36.796195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.471 [2024-11-15 12:43:36.796223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.471 [2024-11-15 12:43:36.796235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.471 [2024-11-15 12:43:36.796244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.471 [2024-11-15 12:43:36.797854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.471 [2024-11-15 12:43:36.797915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.471 [2024-11-15 12:43:36.797965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:56.471 [2024-11-15 12:43:36.797969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:56.730 [2024-11-15 12:43:36.920923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:56.730 Malloc0 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.730 12:43:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:56.730 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.730 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:56.730 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.730 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:56.730 [2024-11-15 12:43:37.010564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.730 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.730 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:56.730 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.730 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:56.730 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.730 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:56.730 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.730 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:56.730 [ 00:21:56.730 { 00:21:56.730 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:56.730 "subtype": "Discovery", 00:21:56.730 "listen_addresses": [ 00:21:56.730 { 00:21:56.730 "trtype": "TCP", 00:21:56.730 "adrfam": "IPv4", 00:21:56.730 "traddr": "10.0.0.2", 00:21:56.730 "trsvcid": "4420" 00:21:56.730 } 00:21:56.730 ], 00:21:56.730 "allow_any_host": true, 00:21:56.730 "hosts": [] 00:21:56.730 }, 00:21:56.730 { 00:21:56.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.730 "subtype": "NVMe", 00:21:56.730 "listen_addresses": [ 00:21:56.730 { 00:21:56.730 "trtype": "TCP", 00:21:56.730 "adrfam": "IPv4", 00:21:56.730 "traddr": "10.0.0.2", 00:21:56.730 "trsvcid": "4420" 00:21:56.730 } 00:21:56.730 ], 00:21:56.730 "allow_any_host": true, 00:21:56.730 "hosts": [], 00:21:56.730 "serial_number": "SPDK00000000000001", 00:21:56.730 "model_number": "SPDK bdev Controller", 00:21:56.730 "max_namespaces": 32, 00:21:56.730 "min_cntlid": 1, 00:21:56.730 "max_cntlid": 65519, 00:21:56.730 "namespaces": [ 00:21:56.730 { 00:21:56.730 "nsid": 1, 00:21:56.730 "bdev_name": "Malloc0", 00:21:56.730 "name": "Malloc0", 00:21:56.730 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:56.730 "eui64": "ABCDEF0123456789", 00:21:56.730 "uuid": "3da8f02e-bc20-4012-a1a9-e1cbfd921da7" 00:21:56.730 } 00:21:56.730 ] 00:21:56.730 } 00:21:56.730 ] 00:21:56.730 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.731 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:56.731 [2024-11-15 12:43:37.049077] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:21:56.731 [2024-11-15 12:43:37.049117] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083418 ] 00:21:56.992 [2024-11-15 12:43:37.096136] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:56.992 [2024-11-15 12:43:37.096195] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:56.992 [2024-11-15 12:43:37.096208] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:56.992 [2024-11-15 12:43:37.096223] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:56.992 [2024-11-15 12:43:37.096240] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:56.992 [2024-11-15 12:43:37.104163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:56.992 [2024-11-15 12:43:37.104226] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a22690 0 00:21:56.992 [2024-11-15 12:43:37.109758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:56.992 [2024-11-15 12:43:37.109781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:56.992 [2024-11-15 12:43:37.109790] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:56.992 [2024-11-15 12:43:37.109801] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:56.992 [2024-11-15 12:43:37.109846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.992 [2024-11-15 12:43:37.109859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.992 [2024-11-15 12:43:37.109867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a22690) 00:21:56.992 [2024-11-15 12:43:37.109884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:56.992 [2024-11-15 12:43:37.109911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84100, cid 0, qid 0 00:21:56.992 [2024-11-15 12:43:37.118738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.992 [2024-11-15 12:43:37.118756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.992 [2024-11-15 12:43:37.118764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.992 [2024-11-15 12:43:37.118771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84100) on tqpair=0x1a22690 00:21:56.992 [2024-11-15 12:43:37.118791] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:56.992 [2024-11-15 12:43:37.118804] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:56.992 [2024-11-15 12:43:37.118814] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:56.992 [2024-11-15 12:43:37.118835] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.992 [2024-11-15 12:43:37.118844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.992 [2024-11-15 12:43:37.118851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a22690) 00:21:56.992 [2024-11-15 12:43:37.118862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.992 [2024-11-15 12:43:37.118886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84100, cid 0, qid 0 00:21:56.992 [2024-11-15 12:43:37.119035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.993 [2024-11-15 12:43:37.119050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.993 [2024-11-15 12:43:37.119057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.119064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84100) on tqpair=0x1a22690 00:21:56.993 [2024-11-15 12:43:37.119074] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:56.993 [2024-11-15 12:43:37.119086] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:56.993 [2024-11-15 12:43:37.119099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.119107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.119113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a22690) 00:21:56.993 [2024-11-15 12:43:37.119124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.993 [2024-11-15 12:43:37.119146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84100, cid 0, qid 0 00:21:56.993 [2024-11-15 12:43:37.119233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.993 [2024-11-15 12:43:37.119247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.993 [2024-11-15 12:43:37.119254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.119261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84100) on tqpair=0x1a22690 00:21:56.993 [2024-11-15 12:43:37.119270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:56.993 [2024-11-15 12:43:37.119284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:56.993 [2024-11-15 12:43:37.119302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.119311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.119318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a22690) 00:21:56.993 [2024-11-15 12:43:37.119328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.993 [2024-11-15 12:43:37.119350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84100, cid 0, qid 0 
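The rpc_cmd calls from host/identify.sh that follow target start-up are what provision the subsystem this identify pass is probing. A minimal sketch of the same sequence issued directly with scripts/rpc.py, assuming the target's default /var/tmp/spdk.sock RPC socket:

    # TCP transport with the same options the test passes (-o, 8192-byte io unit)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks to back the namespace
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem, namespace (with the NGUID/EUI64 seen in the report), and listeners
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # dump the resulting configuration, matching the JSON shown above
    scripts/rpc.py nvmf_get_subsystems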
00:21:56.993 [2024-11-15 12:43:37.119436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.993 [2024-11-15 12:43:37.119449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.993 [2024-11-15 12:43:37.119456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.119462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84100) on tqpair=0x1a22690 00:21:56.993 [2024-11-15 12:43:37.119471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:56.993 [2024-11-15 12:43:37.119487] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.119496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.119503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a22690) 00:21:56.993 [2024-11-15 12:43:37.119513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.993 [2024-11-15 12:43:37.119534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84100, cid 0, qid 0 00:21:56.993 [2024-11-15 12:43:37.119637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.993 [2024-11-15 12:43:37.119651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.993 [2024-11-15 12:43:37.119658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.119665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84100) on tqpair=0x1a22690 00:21:56.993 [2024-11-15 12:43:37.119673] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:56.993 [2024-11-15 12:43:37.119682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:56.993 [2024-11-15 12:43:37.119695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:56.993 [2024-11-15 12:43:37.119814] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:56.993 [2024-11-15 12:43:37.119826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:56.993 [2024-11-15 12:43:37.119841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.119849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.119855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a22690) 00:21:56.993 [2024-11-15 12:43:37.119866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.993 [2024-11-15 12:43:37.119903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84100, cid 0, qid 0 00:21:56.993 [2024-11-15 12:43:37.120069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.993 [2024-11-15 12:43:37.120084] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.993 [2024-11-15 12:43:37.120091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.120098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84100) on tqpair=0x1a22690 00:21:56.993 [2024-11-15 12:43:37.120113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:56.993 [2024-11-15 12:43:37.120131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.120141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.120147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a22690) 00:21:56.993 [2024-11-15 12:43:37.120158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.993 [2024-11-15 12:43:37.120179] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84100, cid 0, qid 0 00:21:56.993 [2024-11-15 12:43:37.120307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.993 [2024-11-15 12:43:37.120321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.993 [2024-11-15 12:43:37.120328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.120335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84100) on tqpair=0x1a22690 00:21:56.993 [2024-11-15 12:43:37.120342] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:56.993 [2024-11-15 12:43:37.120351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:56.993 [2024-11-15 12:43:37.120365] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:56.993 [2024-11-15 12:43:37.120384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:56.993 [2024-11-15 12:43:37.120400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.120408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a22690) 00:21:56.993 [2024-11-15 12:43:37.120419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.993 [2024-11-15 12:43:37.120441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84100, cid 0, qid 0 00:21:56.993 [2024-11-15 12:43:37.120669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.993 [2024-11-15 12:43:37.120685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.993 [2024-11-15 12:43:37.120692] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.120699] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a22690): datao=0, datal=4096, cccid=0 00:21:56.993 [2024-11-15 12:43:37.120712] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1a84100) on tqpair(0x1a22690): expected_datao=0, payload_size=4096 00:21:56.993 [2024-11-15 12:43:37.120729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.120742] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.120750] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.120763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.993 [2024-11-15 12:43:37.120772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.993 [2024-11-15 12:43:37.120779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.120786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84100) on tqpair=0x1a22690 00:21:56.993 [2024-11-15 12:43:37.120798] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:56.993 [2024-11-15 12:43:37.120807] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:56.993 [2024-11-15 12:43:37.120814] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:56.993 [2024-11-15 12:43:37.120831] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:56.993 [2024-11-15 12:43:37.120841] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:56.993 [2024-11-15 12:43:37.120849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:56.993 [2024-11-15 12:43:37.120868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:56.993 [2024-11-15 12:43:37.120882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.120890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.993 [2024-11-15 12:43:37.120896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a22690) 00:21:56.993 [2024-11-15 12:43:37.120907] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:56.993 [2024-11-15 12:43:37.120929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84100, cid 0, qid 0 00:21:56.994 [2024-11-15 12:43:37.121108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.994 [2024-11-15 12:43:37.121120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.994 [2024-11-15 12:43:37.121127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84100) on tqpair=0x1a22690 00:21:56.994 [2024-11-15 12:43:37.121145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a22690) 00:21:56.994 
[2024-11-15 12:43:37.121169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.994 [2024-11-15 12:43:37.121180] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a22690) 00:21:56.994 [2024-11-15 12:43:37.121201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.994 [2024-11-15 12:43:37.121211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a22690) 00:21:56.994 [2024-11-15 12:43:37.121232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.994 [2024-11-15 12:43:37.121258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121265] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a22690) 00:21:56.994 [2024-11-15 12:43:37.121280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.994 [2024-11-15 12:43:37.121289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:56.994 [2024-11-15 12:43:37.121303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:56.994 [2024-11-15 12:43:37.121314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a22690) 00:21:56.994 [2024-11-15 12:43:37.121335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.994 [2024-11-15 12:43:37.121373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84100, cid 0, qid 0 00:21:56.994 [2024-11-15 12:43:37.121384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84280, cid 1, qid 0 00:21:56.994 [2024-11-15 12:43:37.121391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84400, cid 2, qid 0 00:21:56.994 [2024-11-15 12:43:37.121398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84580, cid 3, qid 0 00:21:56.994 [2024-11-15 12:43:37.121405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84700, cid 4, qid 0 00:21:56.994 [2024-11-15 12:43:37.121631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.994 [2024-11-15 12:43:37.121646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.994 [2024-11-15 12:43:37.121653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:21:56.994 [2024-11-15 12:43:37.121660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84700) on tqpair=0x1a22690 00:21:56.994 [2024-11-15 12:43:37.121674] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:56.994 [2024-11-15 12:43:37.121684] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:56.994 [2024-11-15 12:43:37.121702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a22690) 00:21:56.994 [2024-11-15 12:43:37.121732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.994 [2024-11-15 12:43:37.121754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84700, cid 4, qid 0 00:21:56.994 [2024-11-15 12:43:37.121887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.994 [2024-11-15 12:43:37.121901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.994 [2024-11-15 12:43:37.121908] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121914] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a22690): datao=0, datal=4096, cccid=4 00:21:56.994 [2024-11-15 12:43:37.121922] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a84700) on tqpair(0x1a22690): expected_datao=0, payload_size=4096 00:21:56.994 [2024-11-15 12:43:37.121929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121949] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121960] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.121980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.994 [2024-11-15 12:43:37.121992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.994 [2024-11-15 12:43:37.121999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.122005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84700) on tqpair=0x1a22690 00:21:56.994 [2024-11-15 12:43:37.122024] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:56.994 [2024-11-15 12:43:37.122059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.122070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a22690) 00:21:56.994 [2024-11-15 12:43:37.122081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.994 [2024-11-15 12:43:37.122093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.122104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.122111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a22690) 00:21:56.994 [2024-11-15 12:43:37.122121] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.994 [2024-11-15 12:43:37.122147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84700, cid 4, qid 0 00:21:56.994 [2024-11-15 12:43:37.122160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84880, cid 5, qid 0 00:21:56.994 [2024-11-15 12:43:37.122304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.994 [2024-11-15 12:43:37.122318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.994 [2024-11-15 12:43:37.122325] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.122331] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a22690): datao=0, datal=1024, cccid=4 00:21:56.994 [2024-11-15 12:43:37.122339] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a84700) on tqpair(0x1a22690): expected_datao=0, payload_size=1024 00:21:56.994 [2024-11-15 12:43:37.122346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.122356] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.122363] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.122372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.994 [2024-11-15 12:43:37.122380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.994 [2024-11-15 12:43:37.122387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.122393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84880) on tqpair=0x1a22690 00:21:56.994 [2024-11-15 12:43:37.165731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.994 [2024-11-15 12:43:37.165749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.994 [2024-11-15 12:43:37.165756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.165763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84700) on tqpair=0x1a22690 00:21:56.994 [2024-11-15 12:43:37.165781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.165790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a22690) 00:21:56.994 [2024-11-15 12:43:37.165802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.994 [2024-11-15 12:43:37.165832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84700, cid 4, qid 0 00:21:56.994 [2024-11-15 12:43:37.165980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.994 [2024-11-15 12:43:37.165995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.994 [2024-11-15 12:43:37.166002] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.166008] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a22690): datao=0, datal=3072, cccid=4 00:21:56.994 [2024-11-15 12:43:37.166016] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a84700) on tqpair(0x1a22690): expected_datao=0, payload_size=3072 00:21:56.994 [2024-11-15 12:43:37.166023] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.166033] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.166041] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.166053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.994 [2024-11-15 12:43:37.166063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.994 [2024-11-15 12:43:37.166069] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.166076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84700) on tqpair=0x1a22690 00:21:56.994 [2024-11-15 12:43:37.166096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.994 [2024-11-15 12:43:37.166106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a22690) 00:21:56.994 [2024-11-15 12:43:37.166117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.995 [2024-11-15 12:43:37.166146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84700, cid 4, qid 0 00:21:56.995 [2024-11-15 12:43:37.166245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.995 [2024-11-15 12:43:37.166259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.995 [2024-11-15 12:43:37.166266] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.995 [2024-11-15 12:43:37.166272] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a22690): datao=0, datal=8, cccid=4 00:21:56.995 [2024-11-15 12:43:37.166280] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a84700) on tqpair(0x1a22690): expected_datao=0, payload_size=8 00:21:56.995 [2024-11-15 12:43:37.166287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.995 [2024-11-15 12:43:37.166297] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.995 [2024-11-15 12:43:37.166304] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.995 [2024-11-15 12:43:37.207880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.995 [2024-11-15 12:43:37.207899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.995 [2024-11-15 12:43:37.207907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.995 [2024-11-15 12:43:37.207914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84700) on tqpair=0x1a22690 00:21:56.995 ===================================================== 00:21:56.995 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:56.995 ===================================================== 00:21:56.995 Controller Capabilities/Features 00:21:56.995 ================================ 00:21:56.995 Vendor ID: 0000 00:21:56.995 Subsystem Vendor ID: 0000 00:21:56.995 Serial Number: .................... 00:21:56.995 Model Number: ........................................ 
00:21:56.995 Firmware Version: 25.01 00:21:56.995 Recommended Arb Burst: 0 00:21:56.995 IEEE OUI Identifier: 00 00 00 00:21:56.995 Multi-path I/O 00:21:56.995 May have multiple subsystem ports: No 00:21:56.995 May have multiple controllers: No 00:21:56.995 Associated with SR-IOV VF: No 00:21:56.995 Max Data Transfer Size: 131072 00:21:56.995 Max Number of Namespaces: 0 00:21:56.995 Max Number of I/O Queues: 1024 00:21:56.995 NVMe Specification Version (VS): 1.3 00:21:56.995 NVMe Specification Version (Identify): 1.3 00:21:56.995 Maximum Queue Entries: 128 00:21:56.995 Contiguous Queues Required: Yes 00:21:56.995 Arbitration Mechanisms Supported 00:21:56.995 Weighted Round Robin: Not Supported 00:21:56.995 Vendor Specific: Not Supported 00:21:56.995 Reset Timeout: 15000 ms 00:21:56.995 Doorbell Stride: 4 bytes 00:21:56.995 NVM Subsystem Reset: Not Supported 00:21:56.995 Command Sets Supported 00:21:56.995 NVM Command Set: Supported 00:21:56.995 Boot Partition: Not Supported 00:21:56.995 Memory Page Size Minimum: 4096 bytes 00:21:56.995 Memory Page Size Maximum: 4096 bytes 00:21:56.995 Persistent Memory Region: Not Supported 00:21:56.995 Optional Asynchronous Events Supported 00:21:56.995 Namespace Attribute Notices: Not Supported 00:21:56.995 Firmware Activation Notices: Not Supported 00:21:56.995 ANA Change Notices: Not Supported 00:21:56.995 PLE Aggregate Log Change Notices: Not Supported 00:21:56.995 LBA Status Info Alert Notices: Not Supported 00:21:56.995 EGE Aggregate Log Change Notices: Not Supported 00:21:56.995 Normal NVM Subsystem Shutdown event: Not Supported 00:21:56.995 Zone Descriptor Change Notices: Not Supported 00:21:56.995 Discovery Log Change Notices: Supported 00:21:56.995 Controller Attributes 00:21:56.995 128-bit Host Identifier: Not Supported 00:21:56.995 Non-Operational Permissive Mode: Not Supported 00:21:56.995 NVM Sets: Not Supported 00:21:56.995 Read Recovery Levels: Not Supported 00:21:56.995 Endurance Groups: Not Supported 00:21:56.995 Predictable Latency Mode: Not Supported 00:21:56.995 Traffic Based Keep ALive: Not Supported 00:21:56.995 Namespace Granularity: Not Supported 00:21:56.995 SQ Associations: Not Supported 00:21:56.995 UUID List: Not Supported 00:21:56.995 Multi-Domain Subsystem: Not Supported 00:21:56.995 Fixed Capacity Management: Not Supported 00:21:56.995 Variable Capacity Management: Not Supported 00:21:56.995 Delete Endurance Group: Not Supported 00:21:56.995 Delete NVM Set: Not Supported 00:21:56.995 Extended LBA Formats Supported: Not Supported 00:21:56.995 Flexible Data Placement Supported: Not Supported 00:21:56.995 00:21:56.995 Controller Memory Buffer Support 00:21:56.995 ================================ 00:21:56.995 Supported: No 00:21:56.995 00:21:56.995 Persistent Memory Region Support 00:21:56.995 ================================ 00:21:56.995 Supported: No 00:21:56.995 00:21:56.995 Admin Command Set Attributes 00:21:56.995 ============================ 00:21:56.995 Security Send/Receive: Not Supported 00:21:56.995 Format NVM: Not Supported 00:21:56.995 Firmware Activate/Download: Not Supported 00:21:56.995 Namespace Management: Not Supported 00:21:56.995 Device Self-Test: Not Supported 00:21:56.995 Directives: Not Supported 00:21:56.995 NVMe-MI: Not Supported 00:21:56.995 Virtualization Management: Not Supported 00:21:56.995 Doorbell Buffer Config: Not Supported 00:21:56.995 Get LBA Status Capability: Not Supported 00:21:56.995 Command & Feature Lockdown Capability: Not Supported 00:21:56.995 Abort Command Limit: 1 00:21:56.995 Async 
Event Request Limit: 4 00:21:56.995 Number of Firmware Slots: N/A 00:21:56.995 Firmware Slot 1 Read-Only: N/A 00:21:56.995 Firmware Activation Without Reset: N/A 00:21:56.995 Multiple Update Detection Support: N/A 00:21:56.995 Firmware Update Granularity: No Information Provided 00:21:56.995 Per-Namespace SMART Log: No 00:21:56.995 Asymmetric Namespace Access Log Page: Not Supported 00:21:56.995 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:56.995 Command Effects Log Page: Not Supported 00:21:56.995 Get Log Page Extended Data: Supported 00:21:56.995 Telemetry Log Pages: Not Supported 00:21:56.995 Persistent Event Log Pages: Not Supported 00:21:56.995 Supported Log Pages Log Page: May Support 00:21:56.995 Commands Supported & Effects Log Page: Not Supported 00:21:56.995 Feature Identifiers & Effects Log Page:May Support 00:21:56.995 NVMe-MI Commands & Effects Log Page: May Support 00:21:56.995 Data Area 4 for Telemetry Log: Not Supported 00:21:56.995 Error Log Page Entries Supported: 128 00:21:56.995 Keep Alive: Not Supported 00:21:56.995 00:21:56.995 NVM Command Set Attributes 00:21:56.995 ========================== 00:21:56.995 Submission Queue Entry Size 00:21:56.995 Max: 1 00:21:56.995 Min: 1 00:21:56.995 Completion Queue Entry Size 00:21:56.995 Max: 1 00:21:56.995 Min: 1 00:21:56.995 Number of Namespaces: 0 00:21:56.995 Compare Command: Not Supported 00:21:56.995 Write Uncorrectable Command: Not Supported 00:21:56.995 Dataset Management Command: Not Supported 00:21:56.995 Write Zeroes Command: Not Supported 00:21:56.995 Set Features Save Field: Not Supported 00:21:56.995 Reservations: Not Supported 00:21:56.995 Timestamp: Not Supported 00:21:56.995 Copy: Not Supported 00:21:56.995 Volatile Write Cache: Not Present 00:21:56.995 Atomic Write Unit (Normal): 1 00:21:56.995 Atomic Write Unit (PFail): 1 00:21:56.995 Atomic Compare & Write Unit: 1 00:21:56.995 Fused Compare & Write: Supported 00:21:56.995 Scatter-Gather List 00:21:56.995 SGL Command Set: Supported 00:21:56.995 SGL Keyed: Supported 00:21:56.995 SGL Bit Bucket Descriptor: Not Supported 00:21:56.995 SGL Metadata Pointer: Not Supported 00:21:56.995 Oversized SGL: Not Supported 00:21:56.995 SGL Metadata Address: Not Supported 00:21:56.995 SGL Offset: Supported 00:21:56.995 Transport SGL Data Block: Not Supported 00:21:56.995 Replay Protected Memory Block: Not Supported 00:21:56.996 00:21:56.996 Firmware Slot Information 00:21:56.996 ========================= 00:21:56.996 Active slot: 0 00:21:56.996 00:21:56.996 00:21:56.996 Error Log 00:21:56.996 ========= 00:21:56.996 00:21:56.996 Active Namespaces 00:21:56.996 ================= 00:21:56.996 Discovery Log Page 00:21:56.996 ================== 00:21:56.996 Generation Counter: 2 00:21:56.996 Number of Records: 2 00:21:56.996 Record Format: 0 00:21:56.996 00:21:56.996 Discovery Log Entry 0 00:21:56.996 ---------------------- 00:21:56.996 Transport Type: 3 (TCP) 00:21:56.996 Address Family: 1 (IPv4) 00:21:56.996 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:56.996 Entry Flags: 00:21:56.996 Duplicate Returned Information: 1 00:21:56.996 Explicit Persistent Connection Support for Discovery: 1 00:21:56.996 Transport Requirements: 00:21:56.996 Secure Channel: Not Required 00:21:56.996 Port ID: 0 (0x0000) 00:21:56.996 Controller ID: 65535 (0xffff) 00:21:56.996 Admin Max SQ Size: 128 00:21:56.996 Transport Service Identifier: 4420 00:21:56.996 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:56.996 Transport Address: 10.0.0.2 00:21:56.996 
Discovery Log Entry 1 00:21:56.996 ---------------------- 00:21:56.996 Transport Type: 3 (TCP) 00:21:56.996 Address Family: 1 (IPv4) 00:21:56.996 Subsystem Type: 2 (NVM Subsystem) 00:21:56.996 Entry Flags: 00:21:56.996 Duplicate Returned Information: 0 00:21:56.996 Explicit Persistent Connection Support for Discovery: 0 00:21:56.996 Transport Requirements: 00:21:56.996 Secure Channel: Not Required 00:21:56.996 Port ID: 0 (0x0000) 00:21:56.996 Controller ID: 65535 (0xffff) 00:21:56.996 Admin Max SQ Size: 128 00:21:56.996 Transport Service Identifier: 4420 00:21:56.996 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:56.996 Transport Address: 10.0.0.2 [2024-11-15 12:43:37.208032] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:56.996 [2024-11-15 12:43:37.208053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84100) on tqpair=0x1a22690 00:21:56.996 [2024-11-15 12:43:37.208065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.996 [2024-11-15 12:43:37.208075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84280) on tqpair=0x1a22690 00:21:56.996 [2024-11-15 12:43:37.208082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.996 [2024-11-15 12:43:37.208090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84400) on tqpair=0x1a22690 00:21:56.996 [2024-11-15 12:43:37.208098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.996 [2024-11-15 12:43:37.208106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84580) on tqpair=0x1a22690 00:21:56.996 [2024-11-15 12:43:37.208113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.996 [2024-11-15 12:43:37.208131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.996 [2024-11-15 12:43:37.208140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.996 [2024-11-15 12:43:37.208147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a22690) 00:21:56.996 [2024-11-15 12:43:37.208174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.996 [2024-11-15 12:43:37.208199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84580, cid 3, qid 0 00:21:56.996 [2024-11-15 12:43:37.208330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.996 [2024-11-15 12:43:37.208345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.996 [2024-11-15 12:43:37.208352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.996 [2024-11-15 12:43:37.208359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84580) on tqpair=0x1a22690 00:21:56.996 [2024-11-15 12:43:37.208377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.996 [2024-11-15 12:43:37.208387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.996 [2024-11-15 12:43:37.208394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a22690) 00:21:56.996 [2024-11-15 
12:43:37.208405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.996 [2024-11-15 12:43:37.208432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84580, cid 3, qid 0 00:21:56.996 [2024-11-15 12:43:37.208534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.996 [2024-11-15 12:43:37.208545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.996 [2024-11-15 12:43:37.208552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.996 [2024-11-15 12:43:37.208559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84580) on tqpair=0x1a22690 00:21:56.996 [2024-11-15 12:43:37.208567] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:56.996 [2024-11-15 12:43:37.208575] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:56.996 [2024-11-15 12:43:37.208591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.996 [2024-11-15 12:43:37.208600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.996 [2024-11-15 12:43:37.208606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a22690) 00:21:56.996 [2024-11-15 12:43:37.208617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.996 [2024-11-15 12:43:37.208638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84580, cid 3, qid 0 00:21:56.996 [2024-11-15 12:43:37.208767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.996 [2024-11-15 12:43:37.208781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.996 [2024-11-15 12:43:37.208788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.996 [2024-11-15 12:43:37.208795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84580) on tqpair=0x1a22690 00:21:56.996 [2024-11-15 12:43:37.208812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.996 [2024-11-15 12:43:37.208821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.996 [2024-11-15 12:43:37.208827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a22690) 00:21:56.996 [2024-11-15 12:43:37.208838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.996 [2024-11-15 12:43:37.208859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84580, cid 3, qid 0 00:21:56.996 [2024-11-15 12:43:37.208987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.996 [2024-11-15 12:43:37.208999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.996 [2024-11-15 12:43:37.209006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.996 [2024-11-15 12:43:37.209013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84580) on tqpair=0x1a22690 00:21:56.996 [2024-11-15 12:43:37.209029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.996 [2024-11-15 12:43:37.209038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.996 [2024-11-15 12:43:37.209045] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a22690) 00:21:56.996 [2024-11-15 12:43:37.209055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.996 [2024-11-15 12:43:37.209076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84580, cid 3, qid 0 00:21:56.996 [2024-11-15 12:43:37.209161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.997 [2024-11-15 12:43:37.209175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.997 [2024-11-15 12:43:37.209186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.209193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84580) on tqpair=0x1a22690 00:21:56.997 [2024-11-15 12:43:37.209210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.209220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.209226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a22690) 00:21:56.997 [2024-11-15 12:43:37.209237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.997 [2024-11-15 12:43:37.209258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84580, cid 3, qid 0 00:21:56.997 [2024-11-15 12:43:37.209354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.997 [2024-11-15 12:43:37.209368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.997 [2024-11-15 12:43:37.209375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.209382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84580) on tqpair=0x1a22690 00:21:56.997 [2024-11-15 12:43:37.209399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.209408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.209415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a22690) 00:21:56.997 [2024-11-15 12:43:37.209425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.997 [2024-11-15 12:43:37.209446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84580, cid 3, qid 0 00:21:56.997 [2024-11-15 12:43:37.209574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.997 [2024-11-15 12:43:37.209586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.997 [2024-11-15 12:43:37.209593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.209600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84580) on tqpair=0x1a22690 00:21:56.997 [2024-11-15 12:43:37.209616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.209625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.209631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a22690) 00:21:56.997 [2024-11-15 12:43:37.209642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.997 [2024-11-15 12:43:37.209662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84580, cid 3, qid 0 00:21:56.997 [2024-11-15 12:43:37.213735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.997 [2024-11-15 12:43:37.213751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.997 [2024-11-15 12:43:37.213759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.213765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84580) on tqpair=0x1a22690 00:21:56.997 [2024-11-15 12:43:37.213782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.213792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.213798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a22690) 00:21:56.997 [2024-11-15 12:43:37.213809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.997 [2024-11-15 12:43:37.213830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84580, cid 3, qid 0 00:21:56.997 [2024-11-15 12:43:37.213951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.997 [2024-11-15 12:43:37.213966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.997 [2024-11-15 12:43:37.213973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.213984] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84580) on tqpair=0x1a22690 00:21:56.997 [2024-11-15 12:43:37.214004] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:21:56.997 00:21:56.997 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:56.997 [2024-11-15 12:43:37.246571] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:21:56.997 [2024-11-15 12:43:37.246612] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083426 ] 00:21:56.997 [2024-11-15 12:43:37.298604] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:56.997 [2024-11-15 12:43:37.298659] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:56.997 [2024-11-15 12:43:37.298670] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:56.997 [2024-11-15 12:43:37.298684] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:56.997 [2024-11-15 12:43:37.298708] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:56.997 [2024-11-15 12:43:37.302988] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:56.997 [2024-11-15 12:43:37.303040] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f38690 0 00:21:56.997 [2024-11-15 12:43:37.309747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:56.997 [2024-11-15 12:43:37.309767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:56.997 [2024-11-15 12:43:37.309775] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:56.997 [2024-11-15 12:43:37.309781] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:56.997 [2024-11-15 12:43:37.309829] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.309842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.309849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38690) 00:21:56.997 [2024-11-15 12:43:37.309863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:56.997 [2024-11-15 12:43:37.309891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a100, cid 0, qid 0 00:21:56.997 [2024-11-15 12:43:37.316732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.997 [2024-11-15 12:43:37.316751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.997 [2024-11-15 12:43:37.316759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.316766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a100) on tqpair=0x1f38690 00:21:56.997 [2024-11-15 12:43:37.316780] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:56.997 [2024-11-15 12:43:37.316791] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:56.997 [2024-11-15 12:43:37.316801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:56.997 [2024-11-15 12:43:37.316820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.316830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.997 [2024-11-15 12:43:37.316837] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38690) 00:21:56.997 [2024-11-15 12:43:37.316852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.997 [2024-11-15 12:43:37.316879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a100, cid 0, qid 0 00:21:56.997 [2024-11-15 12:43:37.316996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.997 [2024-11-15 12:43:37.317010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.998 [2024-11-15 12:43:37.317017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a100) on tqpair=0x1f38690 00:21:56.998 [2024-11-15 12:43:37.317032] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:56.998 [2024-11-15 12:43:37.317046] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:56.998 [2024-11-15 12:43:37.317058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38690) 00:21:56.998 [2024-11-15 12:43:37.317084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.998 [2024-11-15 12:43:37.317107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a100, cid 0, qid 0 00:21:56.998 [2024-11-15 12:43:37.317186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.998 [2024-11-15 12:43:37.317200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.998 [2024-11-15 12:43:37.317207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a100) on tqpair=0x1f38690 00:21:56.998 [2024-11-15 12:43:37.317222] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:21:56.998 [2024-11-15 12:43:37.317236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:56.998 [2024-11-15 12:43:37.317248] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38690) 00:21:56.998 [2024-11-15 12:43:37.317273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.998 [2024-11-15 12:43:37.317295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a100, cid 0, qid 0 00:21:56.998 [2024-11-15 12:43:37.317367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.998 [2024-11-15 12:43:37.317379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.998 [2024-11-15 
12:43:37.317386] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a100) on tqpair=0x1f38690 00:21:56.998 [2024-11-15 12:43:37.317401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:56.998 [2024-11-15 12:43:37.317417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38690) 00:21:56.998 [2024-11-15 12:43:37.317444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.998 [2024-11-15 12:43:37.317465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a100, cid 0, qid 0 00:21:56.998 [2024-11-15 12:43:37.317545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.998 [2024-11-15 12:43:37.317559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.998 [2024-11-15 12:43:37.317566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317573] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a100) on tqpair=0x1f38690 00:21:56.998 [2024-11-15 12:43:37.317580] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:56.998 [2024-11-15 12:43:37.317589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:56.998 [2024-11-15 12:43:37.317602] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:56.998 [2024-11-15 12:43:37.317712] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:56.998 [2024-11-15 12:43:37.317729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:56.998 [2024-11-15 12:43:37.317743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38690) 00:21:56.998 [2024-11-15 12:43:37.317769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.998 [2024-11-15 12:43:37.317791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a100, cid 0, qid 0 00:21:56.998 [2024-11-15 12:43:37.317894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.998 [2024-11-15 12:43:37.317906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.998 [2024-11-15 12:43:37.317913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317920] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a100) on tqpair=0x1f38690 00:21:56.998 
[2024-11-15 12:43:37.317928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:56.998 [2024-11-15 12:43:37.317944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.317960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38690) 00:21:56.998 [2024-11-15 12:43:37.317971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.998 [2024-11-15 12:43:37.317992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a100, cid 0, qid 0 00:21:56.998 [2024-11-15 12:43:37.318063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.998 [2024-11-15 12:43:37.318077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.998 [2024-11-15 12:43:37.318084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.318091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a100) on tqpair=0x1f38690 00:21:56.998 [2024-11-15 12:43:37.318099] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:56.998 [2024-11-15 12:43:37.318107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:56.998 [2024-11-15 12:43:37.318120] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:56.998 [2024-11-15 12:43:37.318135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:56.998 [2024-11-15 12:43:37.318152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.998 [2024-11-15 12:43:37.318161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38690) 00:21:56.998 [2024-11-15 12:43:37.318173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.998 [2024-11-15 12:43:37.318195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a100, cid 0, qid 0 00:21:56.998 [2024-11-15 12:43:37.318301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.998 [2024-11-15 12:43:37.318313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.999 [2024-11-15 12:43:37.318320] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318326] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38690): datao=0, datal=4096, cccid=0 00:21:56.999 [2024-11-15 12:43:37.318334] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f9a100) on tqpair(0x1f38690): expected_datao=0, payload_size=4096 00:21:56.999 [2024-11-15 12:43:37.318342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318358] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318367] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.999 [2024-11-15 12:43:37.318389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.999 [2024-11-15 12:43:37.318395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318402] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a100) on tqpair=0x1f38690 00:21:56.999 [2024-11-15 12:43:37.318412] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:56.999 [2024-11-15 12:43:37.318421] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:56.999 [2024-11-15 12:43:37.318429] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:56.999 [2024-11-15 12:43:37.318440] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:56.999 [2024-11-15 12:43:37.318449] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:56.999 [2024-11-15 12:43:37.318458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:56.999 [2024-11-15 12:43:37.318478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:56.999 [2024-11-15 12:43:37.318491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38690) 00:21:56.999 [2024-11-15 12:43:37.318517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:56.999 [2024-11-15 12:43:37.318538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a100, cid 0, qid 0 00:21:56.999 [2024-11-15 12:43:37.318620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.999 [2024-11-15 12:43:37.318632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.999 [2024-11-15 12:43:37.318638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a100) on tqpair=0x1f38690 00:21:56.999 [2024-11-15 12:43:37.318655] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f38690) 00:21:56.999 [2024-11-15 12:43:37.318684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.999 [2024-11-15 12:43:37.318695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.999 [2024-11-15 
12:43:37.318708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f38690) 00:21:56.999 [2024-11-15 12:43:37.318725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.999 [2024-11-15 12:43:37.318738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f38690) 00:21:56.999 [2024-11-15 12:43:37.318761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.999 [2024-11-15 12:43:37.318771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:56.999 [2024-11-15 12:43:37.318793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.999 [2024-11-15 12:43:37.318802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:56.999 [2024-11-15 12:43:37.318817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:56.999 [2024-11-15 12:43:37.318829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.318836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f38690) 00:21:56.999 [2024-11-15 12:43:37.318847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.999 [2024-11-15 12:43:37.318870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a100, cid 0, qid 0 00:21:56.999 [2024-11-15 12:43:37.318881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a280, cid 1, qid 0 00:21:56.999 [2024-11-15 12:43:37.318889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a400, cid 2, qid 0 00:21:56.999 [2024-11-15 12:43:37.318897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:56.999 [2024-11-15 12:43:37.318905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a700, cid 4, qid 0 00:21:56.999 [2024-11-15 12:43:37.319043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.999 [2024-11-15 12:43:37.319057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.999 [2024-11-15 12:43:37.319064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.319071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a700) on tqpair=0x1f38690 00:21:56.999 [2024-11-15 12:43:37.319083] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:56.999 [2024-11-15 12:43:37.319093] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:56.999 [2024-11-15 12:43:37.319108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:56.999 [2024-11-15 12:43:37.319119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:56.999 [2024-11-15 12:43:37.319133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.319141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.319148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f38690) 00:21:56.999 [2024-11-15 12:43:37.319159] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:56.999 [2024-11-15 12:43:37.319181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a700, cid 4, qid 0 00:21:56.999 [2024-11-15 12:43:37.319289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.999 [2024-11-15 12:43:37.319303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.999 [2024-11-15 12:43:37.319309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.319316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a700) on tqpair=0x1f38690 00:21:56.999 [2024-11-15 12:43:37.319386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:56.999 [2024-11-15 12:43:37.319406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:56.999 [2024-11-15 12:43:37.319421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.319429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f38690) 00:21:56.999 [2024-11-15 12:43:37.319440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.999 [2024-11-15 12:43:37.319462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a700, cid 4, qid 0 00:21:56.999 [2024-11-15 12:43:37.319551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.999 [2024-11-15 12:43:37.319563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.999 [2024-11-15 12:43:37.319569] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.319576] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38690): datao=0, datal=4096, cccid=4 00:21:56.999 [2024-11-15 12:43:37.319584] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f9a700) on tqpair(0x1f38690): expected_datao=0, payload_size=4096 00:21:56.999 [2024-11-15 12:43:37.319591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.319607] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.999 [2024-11-15 12:43:37.319616] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:57.260 [2024-11-15 
12:43:37.363732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.260 [2024-11-15 12:43:37.363750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.260 [2024-11-15 12:43:37.363758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.260 [2024-11-15 12:43:37.363765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a700) on tqpair=0x1f38690 00:21:57.260 [2024-11-15 12:43:37.363780] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:57.260 [2024-11-15 12:43:37.363802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:57.260 [2024-11-15 12:43:37.363821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:57.260 [2024-11-15 12:43:37.363850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.260 [2024-11-15 12:43:37.363858] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f38690) 00:21:57.260 [2024-11-15 12:43:37.363870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.260 [2024-11-15 12:43:37.363899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a700, cid 4, qid 0 00:21:57.260 [2024-11-15 12:43:37.364036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:57.260 [2024-11-15 12:43:37.364051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:57.260 [2024-11-15 12:43:37.364059] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:57.260 [2024-11-15 12:43:37.364065] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38690): datao=0, datal=4096, cccid=4 00:21:57.260 [2024-11-15 12:43:37.364073] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f9a700) on tqpair(0x1f38690): expected_datao=0, payload_size=4096 00:21:57.260 [2024-11-15 12:43:37.364080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.260 [2024-11-15 12:43:37.364098] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:57.260 [2024-11-15 12:43:37.364107] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:57.260 [2024-11-15 12:43:37.404818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.260 [2024-11-15 12:43:37.404836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.260 [2024-11-15 12:43:37.404844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.260 [2024-11-15 12:43:37.404851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a700) on tqpair=0x1f38690 00:21:57.260 [2024-11-15 12:43:37.404873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:57.260 [2024-11-15 12:43:37.404893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:57.260 [2024-11-15 12:43:37.404908] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.260 [2024-11-15 12:43:37.404917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1f38690) 00:21:57.260 [2024-11-15 12:43:37.404928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.260 [2024-11-15 12:43:37.404953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a700, cid 4, qid 0 00:21:57.260 [2024-11-15 12:43:37.405048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:57.260 [2024-11-15 12:43:37.405063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:57.260 [2024-11-15 12:43:37.405070] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:57.260 [2024-11-15 12:43:37.405077] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38690): datao=0, datal=4096, cccid=4 00:21:57.260 [2024-11-15 12:43:37.405085] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f9a700) on tqpair(0x1f38690): expected_datao=0, payload_size=4096 00:21:57.260 [2024-11-15 12:43:37.405092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.260 [2024-11-15 12:43:37.405109] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:57.260 [2024-11-15 12:43:37.405119] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:57.260 [2024-11-15 12:43:37.448731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.260 [2024-11-15 12:43:37.448749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.260 [2024-11-15 12:43:37.448757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.260 [2024-11-15 12:43:37.448779] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a700) on tqpair=0x1f38690 00:21:57.260 [2024-11-15 12:43:37.448793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:57.260 [2024-11-15 12:43:37.448810] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:57.260 [2024-11-15 12:43:37.448826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:57.260 [2024-11-15 12:43:37.448841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:57.260 [2024-11-15 12:43:37.448851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:57.260 [2024-11-15 12:43:37.448859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:57.260 [2024-11-15 12:43:37.448868] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:57.260 [2024-11-15 12:43:37.448876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:57.260 [2024-11-15 12:43:37.448885] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:57.260 [2024-11-15 12:43:37.448905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.260 
[2024-11-15 12:43:37.448914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f38690) 00:21:57.260 [2024-11-15 12:43:37.448926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.260 [2024-11-15 12:43:37.448938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.260 [2024-11-15 12:43:37.448946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.448952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f38690) 00:21:57.261 [2024-11-15 12:43:37.448962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.261 [2024-11-15 12:43:37.448990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a700, cid 4, qid 0 00:21:57.261 [2024-11-15 12:43:37.449002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a880, cid 5, qid 0 00:21:57.261 [2024-11-15 12:43:37.449096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.261 [2024-11-15 12:43:37.449108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.261 [2024-11-15 12:43:37.449115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.449121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a700) on tqpair=0x1f38690 00:21:57.261 [2024-11-15 12:43:37.449132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.261 [2024-11-15 12:43:37.449141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.261 [2024-11-15 12:43:37.449148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.449154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a880) on tqpair=0x1f38690 00:21:57.261 [2024-11-15 12:43:37.449170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.449179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f38690) 00:21:57.261 [2024-11-15 12:43:37.449190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.261 [2024-11-15 12:43:37.449212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a880, cid 5, qid 0 00:21:57.261 [2024-11-15 12:43:37.449293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.261 [2024-11-15 12:43:37.449307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.261 [2024-11-15 12:43:37.449314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.449321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a880) on tqpair=0x1f38690 00:21:57.261 [2024-11-15 12:43:37.449337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.449346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f38690) 00:21:57.261 [2024-11-15 12:43:37.449356] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.261 [2024-11-15 12:43:37.449382] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a880, cid 5, qid 0 00:21:57.261 [2024-11-15 12:43:37.449455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.261 [2024-11-15 12:43:37.449467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.261 [2024-11-15 12:43:37.449474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.449481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a880) on tqpair=0x1f38690 00:21:57.261 [2024-11-15 12:43:37.449496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.449505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f38690) 00:21:57.261 [2024-11-15 12:43:37.449516] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.261 [2024-11-15 12:43:37.449537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a880, cid 5, qid 0 00:21:57.261 [2024-11-15 12:43:37.449612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.261 [2024-11-15 12:43:37.449624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.261 [2024-11-15 12:43:37.449631] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.449638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a880) on tqpair=0x1f38690 00:21:57.261 [2024-11-15 12:43:37.449662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.449673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f38690) 00:21:57.261 [2024-11-15 12:43:37.449684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.261 [2024-11-15 12:43:37.449697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.449705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f38690) 00:21:57.261 [2024-11-15 12:43:37.449715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.261 [2024-11-15 12:43:37.449737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.449746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1f38690) 00:21:57.261 [2024-11-15 12:43:37.449756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.261 [2024-11-15 12:43:37.449769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.449777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f38690) 00:21:57.261 [2024-11-15 12:43:37.449787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.261 [2024-11-15 12:43:37.449810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a880, cid 5, qid 0 00:21:57.261 
[2024-11-15 12:43:37.449821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a700, cid 4, qid 0 00:21:57.261 [2024-11-15 12:43:37.449829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9aa00, cid 6, qid 0 00:21:57.261 [2024-11-15 12:43:37.449837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9ab80, cid 7, qid 0 00:21:57.261 [2024-11-15 12:43:37.450036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:57.261 [2024-11-15 12:43:37.450048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:57.261 [2024-11-15 12:43:37.450055] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450061] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38690): datao=0, datal=8192, cccid=5 00:21:57.261 [2024-11-15 12:43:37.450073] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f9a880) on tqpair(0x1f38690): expected_datao=0, payload_size=8192 00:21:57.261 [2024-11-15 12:43:37.450082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450103] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450113] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:57.261 [2024-11-15 12:43:37.450131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:57.261 [2024-11-15 12:43:37.450137] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450144] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38690): datao=0, datal=512, cccid=4 00:21:57.261 [2024-11-15 12:43:37.450152] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f9a700) on tqpair(0x1f38690): expected_datao=0, payload_size=512 00:21:57.261 [2024-11-15 12:43:37.450159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450169] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450176] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:57.261 [2024-11-15 12:43:37.450192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:57.261 [2024-11-15 12:43:37.450199] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450205] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38690): datao=0, datal=512, cccid=6 00:21:57.261 [2024-11-15 12:43:37.450212] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f9aa00) on tqpair(0x1f38690): expected_datao=0, payload_size=512 00:21:57.261 [2024-11-15 12:43:37.450220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450229] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450236] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:57.261 [2024-11-15 12:43:37.450253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:57.261 [2024-11-15 12:43:37.450259] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450265] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f38690): datao=0, datal=4096, cccid=7 00:21:57.261 [2024-11-15 12:43:37.450273] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f9ab80) on tqpair(0x1f38690): expected_datao=0, payload_size=4096 00:21:57.261 [2024-11-15 12:43:37.450280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450290] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.450298] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.490836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.261 [2024-11-15 12:43:37.490854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.261 [2024-11-15 12:43:37.490862] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.490869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a880) on tqpair=0x1f38690 00:21:57.261 [2024-11-15 12:43:37.490892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.261 [2024-11-15 12:43:37.490904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.261 [2024-11-15 12:43:37.490911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.261 [2024-11-15 12:43:37.490918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a700) on tqpair=0x1f38690 00:21:57.261 [2024-11-15 12:43:37.490933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.262 [2024-11-15 12:43:37.490944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.262 [2024-11-15 12:43:37.490954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.262 [2024-11-15 12:43:37.490961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9aa00) on tqpair=0x1f38690 00:21:57.262 [2024-11-15 12:43:37.490972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.262 [2024-11-15 12:43:37.490982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.262 [2024-11-15 12:43:37.490989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.262 [2024-11-15 12:43:37.490995] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9ab80) on tqpair=0x1f38690 00:21:57.262 ===================================================== 00:21:57.262 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:57.262 ===================================================== 00:21:57.262 Controller Capabilities/Features 00:21:57.262 ================================ 00:21:57.262 Vendor ID: 8086 00:21:57.262 Subsystem Vendor ID: 8086 00:21:57.262 Serial Number: SPDK00000000000001 00:21:57.262 Model Number: SPDK bdev Controller 00:21:57.262 Firmware Version: 25.01 00:21:57.262 Recommended Arb Burst: 6 00:21:57.262 IEEE OUI Identifier: e4 d2 5c 00:21:57.262 Multi-path I/O 00:21:57.262 May have multiple subsystem ports: Yes 00:21:57.262 May have multiple controllers: Yes 00:21:57.262 Associated with SR-IOV VF: No 00:21:57.262 Max Data Transfer Size: 131072 00:21:57.262 Max Number of Namespaces: 32 00:21:57.262 Max Number of I/O Queues: 127 00:21:57.262 NVMe Specification Version (VS): 1.3 00:21:57.262 NVMe Specification Version (Identify): 1.3 
00:21:57.262 Maximum Queue Entries: 128 00:21:57.262 Contiguous Queues Required: Yes 00:21:57.262 Arbitration Mechanisms Supported 00:21:57.262 Weighted Round Robin: Not Supported 00:21:57.262 Vendor Specific: Not Supported 00:21:57.262 Reset Timeout: 15000 ms 00:21:57.262 Doorbell Stride: 4 bytes 00:21:57.262 NVM Subsystem Reset: Not Supported 00:21:57.262 Command Sets Supported 00:21:57.262 NVM Command Set: Supported 00:21:57.262 Boot Partition: Not Supported 00:21:57.262 Memory Page Size Minimum: 4096 bytes 00:21:57.262 Memory Page Size Maximum: 4096 bytes 00:21:57.262 Persistent Memory Region: Not Supported 00:21:57.262 Optional Asynchronous Events Supported 00:21:57.262 Namespace Attribute Notices: Supported 00:21:57.262 Firmware Activation Notices: Not Supported 00:21:57.262 ANA Change Notices: Not Supported 00:21:57.262 PLE Aggregate Log Change Notices: Not Supported 00:21:57.262 LBA Status Info Alert Notices: Not Supported 00:21:57.262 EGE Aggregate Log Change Notices: Not Supported 00:21:57.262 Normal NVM Subsystem Shutdown event: Not Supported 00:21:57.262 Zone Descriptor Change Notices: Not Supported 00:21:57.262 Discovery Log Change Notices: Not Supported 00:21:57.262 Controller Attributes 00:21:57.262 128-bit Host Identifier: Supported 00:21:57.262 Non-Operational Permissive Mode: Not Supported 00:21:57.262 NVM Sets: Not Supported 00:21:57.262 Read Recovery Levels: Not Supported 00:21:57.262 Endurance Groups: Not Supported 00:21:57.262 Predictable Latency Mode: Not Supported 00:21:57.262 Traffic Based Keep ALive: Not Supported 00:21:57.262 Namespace Granularity: Not Supported 00:21:57.262 SQ Associations: Not Supported 00:21:57.262 UUID List: Not Supported 00:21:57.262 Multi-Domain Subsystem: Not Supported 00:21:57.262 Fixed Capacity Management: Not Supported 00:21:57.262 Variable Capacity Management: Not Supported 00:21:57.262 Delete Endurance Group: Not Supported 00:21:57.262 Delete NVM Set: Not Supported 00:21:57.262 Extended LBA Formats Supported: Not Supported 00:21:57.262 Flexible Data Placement Supported: Not Supported 00:21:57.262 00:21:57.262 Controller Memory Buffer Support 00:21:57.262 ================================ 00:21:57.262 Supported: No 00:21:57.262 00:21:57.262 Persistent Memory Region Support 00:21:57.262 ================================ 00:21:57.262 Supported: No 00:21:57.262 00:21:57.262 Admin Command Set Attributes 00:21:57.262 ============================ 00:21:57.262 Security Send/Receive: Not Supported 00:21:57.262 Format NVM: Not Supported 00:21:57.262 Firmware Activate/Download: Not Supported 00:21:57.262 Namespace Management: Not Supported 00:21:57.262 Device Self-Test: Not Supported 00:21:57.262 Directives: Not Supported 00:21:57.262 NVMe-MI: Not Supported 00:21:57.262 Virtualization Management: Not Supported 00:21:57.262 Doorbell Buffer Config: Not Supported 00:21:57.262 Get LBA Status Capability: Not Supported 00:21:57.262 Command & Feature Lockdown Capability: Not Supported 00:21:57.262 Abort Command Limit: 4 00:21:57.262 Async Event Request Limit: 4 00:21:57.262 Number of Firmware Slots: N/A 00:21:57.262 Firmware Slot 1 Read-Only: N/A 00:21:57.262 Firmware Activation Without Reset: N/A 00:21:57.262 Multiple Update Detection Support: N/A 00:21:57.262 Firmware Update Granularity: No Information Provided 00:21:57.262 Per-Namespace SMART Log: No 00:21:57.262 Asymmetric Namespace Access Log Page: Not Supported 00:21:57.262 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:57.262 Command Effects Log Page: Supported 00:21:57.262 Get Log Page Extended 
Data: Supported 00:21:57.262 Telemetry Log Pages: Not Supported 00:21:57.262 Persistent Event Log Pages: Not Supported 00:21:57.262 Supported Log Pages Log Page: May Support 00:21:57.262 Commands Supported & Effects Log Page: Not Supported 00:21:57.262 Feature Identifiers & Effects Log Page:May Support 00:21:57.262 NVMe-MI Commands & Effects Log Page: May Support 00:21:57.262 Data Area 4 for Telemetry Log: Not Supported 00:21:57.262 Error Log Page Entries Supported: 128 00:21:57.262 Keep Alive: Supported 00:21:57.262 Keep Alive Granularity: 10000 ms 00:21:57.262 00:21:57.262 NVM Command Set Attributes 00:21:57.262 ========================== 00:21:57.262 Submission Queue Entry Size 00:21:57.262 Max: 64 00:21:57.262 Min: 64 00:21:57.262 Completion Queue Entry Size 00:21:57.262 Max: 16 00:21:57.262 Min: 16 00:21:57.262 Number of Namespaces: 32 00:21:57.262 Compare Command: Supported 00:21:57.262 Write Uncorrectable Command: Not Supported 00:21:57.262 Dataset Management Command: Supported 00:21:57.262 Write Zeroes Command: Supported 00:21:57.262 Set Features Save Field: Not Supported 00:21:57.262 Reservations: Supported 00:21:57.262 Timestamp: Not Supported 00:21:57.262 Copy: Supported 00:21:57.262 Volatile Write Cache: Present 00:21:57.262 Atomic Write Unit (Normal): 1 00:21:57.262 Atomic Write Unit (PFail): 1 00:21:57.262 Atomic Compare & Write Unit: 1 00:21:57.262 Fused Compare & Write: Supported 00:21:57.262 Scatter-Gather List 00:21:57.262 SGL Command Set: Supported 00:21:57.262 SGL Keyed: Supported 00:21:57.262 SGL Bit Bucket Descriptor: Not Supported 00:21:57.262 SGL Metadata Pointer: Not Supported 00:21:57.263 Oversized SGL: Not Supported 00:21:57.263 SGL Metadata Address: Not Supported 00:21:57.263 SGL Offset: Supported 00:21:57.263 Transport SGL Data Block: Not Supported 00:21:57.263 Replay Protected Memory Block: Not Supported 00:21:57.263 00:21:57.263 Firmware Slot Information 00:21:57.263 ========================= 00:21:57.263 Active slot: 1 00:21:57.263 Slot 1 Firmware Revision: 25.01 00:21:57.263 00:21:57.263 00:21:57.263 Commands Supported and Effects 00:21:57.263 ============================== 00:21:57.263 Admin Commands 00:21:57.263 -------------- 00:21:57.263 Get Log Page (02h): Supported 00:21:57.263 Identify (06h): Supported 00:21:57.263 Abort (08h): Supported 00:21:57.263 Set Features (09h): Supported 00:21:57.263 Get Features (0Ah): Supported 00:21:57.263 Asynchronous Event Request (0Ch): Supported 00:21:57.263 Keep Alive (18h): Supported 00:21:57.263 I/O Commands 00:21:57.263 ------------ 00:21:57.263 Flush (00h): Supported LBA-Change 00:21:57.263 Write (01h): Supported LBA-Change 00:21:57.263 Read (02h): Supported 00:21:57.263 Compare (05h): Supported 00:21:57.263 Write Zeroes (08h): Supported LBA-Change 00:21:57.263 Dataset Management (09h): Supported LBA-Change 00:21:57.263 Copy (19h): Supported LBA-Change 00:21:57.263 00:21:57.263 Error Log 00:21:57.263 ========= 00:21:57.263 00:21:57.263 Arbitration 00:21:57.263 =========== 00:21:57.263 Arbitration Burst: 1 00:21:57.263 00:21:57.263 Power Management 00:21:57.263 ================ 00:21:57.263 Number of Power States: 1 00:21:57.263 Current Power State: Power State #0 00:21:57.263 Power State #0: 00:21:57.263 Max Power: 0.00 W 00:21:57.263 Non-Operational State: Operational 00:21:57.263 Entry Latency: Not Reported 00:21:57.263 Exit Latency: Not Reported 00:21:57.263 Relative Read Throughput: 0 00:21:57.263 Relative Read Latency: 0 00:21:57.263 Relative Write Throughput: 0 00:21:57.263 Relative Write Latency: 0 
00:21:57.263 Idle Power: Not Reported 00:21:57.263 Active Power: Not Reported 00:21:57.263 Non-Operational Permissive Mode: Not Supported 00:21:57.263 00:21:57.263 Health Information 00:21:57.263 ================== 00:21:57.263 Critical Warnings: 00:21:57.263 Available Spare Space: OK 00:21:57.263 Temperature: OK 00:21:57.263 Device Reliability: OK 00:21:57.263 Read Only: No 00:21:57.263 Volatile Memory Backup: OK 00:21:57.263 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:57.263 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:57.263 Available Spare: 0% 00:21:57.263 Available Spare Threshold: 0% 00:21:57.263 Life Percentage Used:[2024-11-15 12:43:37.491137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.263 [2024-11-15 12:43:37.491150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f38690) 00:21:57.263 [2024-11-15 12:43:37.491161] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.263 [2024-11-15 12:43:37.491186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9ab80, cid 7, qid 0 00:21:57.263 [2024-11-15 12:43:37.491300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.263 [2024-11-15 12:43:37.491313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.263 [2024-11-15 12:43:37.491320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.263 [2024-11-15 12:43:37.491327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9ab80) on tqpair=0x1f38690 00:21:57.263 [2024-11-15 12:43:37.491371] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:57.263 [2024-11-15 12:43:37.491391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a100) on tqpair=0x1f38690 00:21:57.263 [2024-11-15 12:43:37.491402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.263 [2024-11-15 12:43:37.491411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a280) on tqpair=0x1f38690 00:21:57.263 [2024-11-15 12:43:37.491419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.263 [2024-11-15 12:43:37.491427] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a400) on tqpair=0x1f38690 00:21:57.263 [2024-11-15 12:43:37.491435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.263 [2024-11-15 12:43:37.491443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.263 [2024-11-15 12:43:37.491451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.263 [2024-11-15 12:43:37.491463] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.263 [2024-11-15 12:43:37.491471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.263 [2024-11-15 12:43:37.491478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:57.263 [2024-11-15 12:43:37.491489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:57.263 [2024-11-15 12:43:37.491513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:57.263 [2024-11-15 12:43:37.491622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.263 [2024-11-15 12:43:37.491636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.263 [2024-11-15 12:43:37.491643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.263 [2024-11-15 12:43:37.491650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.263 [2024-11-15 12:43:37.491661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.263 [2024-11-15 12:43:37.491669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.263 [2024-11-15 12:43:37.491675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:57.263 [2024-11-15 12:43:37.491690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.263 [2024-11-15 12:43:37.491727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:57.263 [2024-11-15 12:43:37.491823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.263 [2024-11-15 12:43:37.491837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.263 [2024-11-15 12:43:37.491844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.263 [2024-11-15 12:43:37.491851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.263 [2024-11-15 12:43:37.491859] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:57.263 [2024-11-15 12:43:37.491866] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:57.263 [2024-11-15 12:43:37.491883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.263 [2024-11-15 12:43:37.491892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.263 [2024-11-15 12:43:37.491899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:57.263 [2024-11-15 12:43:37.491910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.263 [2024-11-15 12:43:37.491932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:57.263 [2024-11-15 12:43:37.492012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.263 [2024-11-15 12:43:37.492026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.263 [2024-11-15 12:43:37.492033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.263 [2024-11-15 12:43:37.492040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.263 [2024-11-15 12:43:37.492056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.263 [2024-11-15 12:43:37.492065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.263 [2024-11-15 12:43:37.492072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:57.263 [2024-11-15 12:43:37.492083] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.263 [2024-11-15 12:43:37.492104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:57.263 [2024-11-15 12:43:37.492197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.263 [2024-11-15 12:43:37.492211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.263 [2024-11-15 12:43:37.492218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.264 [2024-11-15 12:43:37.492241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:57.264 [2024-11-15 12:43:37.492268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.264 [2024-11-15 12:43:37.492289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:57.264 [2024-11-15 12:43:37.492363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.264 [2024-11-15 12:43:37.492377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.264 [2024-11-15 12:43:37.492384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.264 [2024-11-15 12:43:37.492407] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:57.264 [2024-11-15 12:43:37.492438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.264 [2024-11-15 12:43:37.492460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:57.264 [2024-11-15 12:43:37.492534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.264 [2024-11-15 12:43:37.492547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.264 [2024-11-15 12:43:37.492554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.264 [2024-11-15 12:43:37.492577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:57.264 [2024-11-15 12:43:37.492604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.264 [2024-11-15 12:43:37.492625] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:57.264 [2024-11-15 12:43:37.492699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.264 [2024-11-15 12:43:37.492712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.264 [2024-11-15 12:43:37.492728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.264 [2024-11-15 12:43:37.492753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:57.264 [2024-11-15 12:43:37.492780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.264 [2024-11-15 12:43:37.492802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:57.264 [2024-11-15 12:43:37.492873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.264 [2024-11-15 12:43:37.492885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.264 [2024-11-15 12:43:37.492892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.264 [2024-11-15 12:43:37.492915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.492930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:57.264 [2024-11-15 12:43:37.492941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.264 [2024-11-15 12:43:37.492962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:57.264 [2024-11-15 12:43:37.493032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.264 [2024-11-15 12:43:37.493044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.264 [2024-11-15 12:43:37.493051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.493058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.264 [2024-11-15 12:43:37.493073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.493083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.493095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:57.264 [2024-11-15 12:43:37.493107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.264 [2024-11-15 12:43:37.493129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:57.264 [2024-11-15 12:43:37.493202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.264 [2024-11-15 
12:43:37.493216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.264 [2024-11-15 12:43:37.493223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.493230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.264 [2024-11-15 12:43:37.493246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.493255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.493262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:57.264 [2024-11-15 12:43:37.493273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.264 [2024-11-15 12:43:37.493294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:57.264 [2024-11-15 12:43:37.493387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.264 [2024-11-15 12:43:37.493400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.264 [2024-11-15 12:43:37.493407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.493414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.264 [2024-11-15 12:43:37.493430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.493439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.493446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:57.264 [2024-11-15 12:43:37.493457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.264 [2024-11-15 12:43:37.493478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:57.264 [2024-11-15 12:43:37.493552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.264 [2024-11-15 12:43:37.493565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.264 [2024-11-15 12:43:37.493572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.493579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.264 [2024-11-15 12:43:37.493595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.493604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.493611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:57.264 [2024-11-15 12:43:37.493622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.264 [2024-11-15 12:43:37.493643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:57.264 [2024-11-15 12:43:37.497730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.264 [2024-11-15 12:43:37.497747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.264 [2024-11-15 12:43:37.497754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.264 
[2024-11-15 12:43:37.497761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.264 [2024-11-15 12:43:37.497780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.497789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:57.264 [2024-11-15 12:43:37.497796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f38690) 00:21:57.264 [2024-11-15 12:43:37.497811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.264 [2024-11-15 12:43:37.497835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f9a580, cid 3, qid 0 00:21:57.264 [2024-11-15 12:43:37.497950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:57.264 [2024-11-15 12:43:37.497962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:57.265 [2024-11-15 12:43:37.497969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:57.265 [2024-11-15 12:43:37.497976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f9a580) on tqpair=0x1f38690 00:21:57.265 [2024-11-15 12:43:37.497989] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:21:57.265 0% 00:21:57.265 Data Units Read: 0 00:21:57.265 Data Units Written: 0 00:21:57.265 Host Read Commands: 0 00:21:57.265 Host Write Commands: 0 00:21:57.265 Controller Busy Time: 0 minutes 00:21:57.265 Power Cycles: 0 00:21:57.265 Power On Hours: 0 hours 00:21:57.265 Unsafe Shutdowns: 0 00:21:57.265 Unrecoverable Media Errors: 0 00:21:57.265 Lifetime Error Log Entries: 0 00:21:57.265 Warning Temperature Time: 0 minutes 00:21:57.265 Critical Temperature Time: 0 minutes 00:21:57.265 00:21:57.265 Number of Queues 00:21:57.265 ================ 00:21:57.265 Number of I/O Submission Queues: 127 00:21:57.265 Number of I/O Completion Queues: 127 00:21:57.265 00:21:57.265 Active Namespaces 00:21:57.265 ================= 00:21:57.265 Namespace ID:1 00:21:57.265 Error Recovery Timeout: Unlimited 00:21:57.265 Command Set Identifier: NVM (00h) 00:21:57.265 Deallocate: Supported 00:21:57.265 Deallocated/Unwritten Error: Not Supported 00:21:57.265 Deallocated Read Value: Unknown 00:21:57.265 Deallocate in Write Zeroes: Not Supported 00:21:57.265 Deallocated Guard Field: 0xFFFF 00:21:57.265 Flush: Supported 00:21:57.265 Reservation: Supported 00:21:57.265 Namespace Sharing Capabilities: Multiple Controllers 00:21:57.265 Size (in LBAs): 131072 (0GiB) 00:21:57.265 Capacity (in LBAs): 131072 (0GiB) 00:21:57.265 Utilization (in LBAs): 131072 (0GiB) 00:21:57.265 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:57.265 EUI64: ABCDEF0123456789 00:21:57.265 UUID: 3da8f02e-bc20-4012-a1a9-e1cbfd921da7 00:21:57.265 Thin Provisioning: Not Supported 00:21:57.265 Per-NS Atomic Units: Yes 00:21:57.265 Atomic Boundary Size (Normal): 0 00:21:57.265 Atomic Boundary Size (PFail): 0 00:21:57.265 Atomic Boundary Offset: 0 00:21:57.265 Maximum Single Source Range Length: 65535 00:21:57.265 Maximum Copy Length: 65535 00:21:57.265 Maximum Source Range Count: 1 00:21:57.265 NGUID/EUI64 Never Reused: No 00:21:57.265 Namespace Write Protected: No 00:21:57.265 Number of LBA Formats: 1 00:21:57.265 Current LBA Format: LBA Format #00 00:21:57.265 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:57.265 00:21:57.265 12:43:37 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:57.265 rmmod nvme_tcp 00:21:57.265 rmmod nvme_fabrics 00:21:57.265 rmmod nvme_keyring 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1083289 ']' 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1083289 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1083289 ']' 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1083289 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.265 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1083289 00:21:57.523 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:57.523 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:57.523 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1083289' 00:21:57.523 killing process with pid 1083289 00:21:57.523 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1083289 00:21:57.523 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1083289 00:21:57.523 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:57.523 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:57.523 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:57.523 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:57.523 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 
00:21:57.523 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:57.523 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:57.782 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:57.782 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:57.782 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.782 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.782 12:43:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.686 12:43:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:59.686 00:21:59.686 real 0m5.633s 00:21:59.686 user 0m4.963s 00:21:59.686 sys 0m1.945s 00:21:59.686 12:43:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.686 12:43:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.686 ************************************ 00:21:59.686 END TEST nvmf_identify 00:21:59.686 ************************************ 00:21:59.686 12:43:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:59.686 12:43:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:59.686 12:43:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.686 12:43:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.686 ************************************ 00:21:59.686 START TEST nvmf_perf 00:21:59.686 ************************************ 00:21:59.686 12:43:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:59.686 * Looking for test storage... 
00:21:59.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:59.686 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:59.686 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:59.686 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:59.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.945 --rc genhtml_branch_coverage=1 00:21:59.945 --rc genhtml_function_coverage=1 00:21:59.945 --rc genhtml_legend=1 00:21:59.945 --rc geninfo_all_blocks=1 00:21:59.945 --rc geninfo_unexecuted_blocks=1 00:21:59.945 00:21:59.945 ' 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:59.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.945 --rc genhtml_branch_coverage=1 00:21:59.945 --rc genhtml_function_coverage=1 00:21:59.945 --rc genhtml_legend=1 00:21:59.945 --rc geninfo_all_blocks=1 00:21:59.945 --rc geninfo_unexecuted_blocks=1 00:21:59.945 00:21:59.945 ' 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:59.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.945 --rc genhtml_branch_coverage=1 00:21:59.945 --rc genhtml_function_coverage=1 00:21:59.945 --rc genhtml_legend=1 00:21:59.945 --rc geninfo_all_blocks=1 00:21:59.945 --rc geninfo_unexecuted_blocks=1 00:21:59.945 00:21:59.945 ' 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:59.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.945 --rc genhtml_branch_coverage=1 00:21:59.945 --rc genhtml_function_coverage=1 00:21:59.945 --rc genhtml_legend=1 00:21:59.945 --rc geninfo_all_blocks=1 00:21:59.945 --rc geninfo_unexecuted_blocks=1 00:21:59.945 00:21:59.945 ' 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.945 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.946 12:43:40 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:59.946 12:43:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:02.471 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:02.472 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:02.472 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:02.472 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.472 12:43:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:02.472 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.472 12:43:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:22:02.472 00:22:02.472 --- 10.0.0.2 ping statistics --- 00:22:02.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.472 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:22:02.472 00:22:02.472 --- 10.0.0.1 ping statistics --- 00:22:02.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.472 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1085481 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1085481 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1085481 ']' 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:02.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:02.472 [2024-11-15 12:43:42.459460] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:22:02.472 [2024-11-15 12:43:42.459549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.472 [2024-11-15 12:43:42.532037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.472 [2024-11-15 12:43:42.590153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.472 [2024-11-15 12:43:42.590204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.472 [2024-11-15 12:43:42.590231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.472 [2024-11-15 12:43:42.590242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.472 [2024-11-15 12:43:42.590258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.472 [2024-11-15 12:43:42.592752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.472 [2024-11-15 12:43:42.592778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.472 [2024-11-15 12:43:42.592841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.472 [2024-11-15 12:43:42.592844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:02.472 12:43:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:05.748 12:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:05.748 12:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:06.005 12:43:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:22:06.005 12:43:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:06.263 12:43:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
00:22:06.263 12:43:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:22:06.263 12:43:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:06.263 12:43:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:06.263 12:43:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:06.521 [2024-11-15 12:43:46.658052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.521 12:43:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:06.778 12:43:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:06.778 12:43:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:07.036 12:43:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:07.036 12:43:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:07.294 12:43:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.552 [2024-11-15 12:43:47.738051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.552 12:43:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:07.810 12:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:22:07.810 12:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:07.810 12:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:07.810 12:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:09.184 Initializing NVMe Controllers 00:22:09.184 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:22:09.184 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:22:09.184 Initialization complete. Launching workers. 
00:22:09.184 ======================================================== 00:22:09.184 Latency(us) 00:22:09.184 Device Information : IOPS MiB/s Average min max 00:22:09.184 PCIE (0000:88:00.0) NSID 1 from core 0: 84559.03 330.31 377.89 43.28 7512.18 00:22:09.184 ======================================================== 00:22:09.184 Total : 84559.03 330.31 377.89 43.28 7512.18 00:22:09.184 00:22:09.184 12:43:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:10.557 Initializing NVMe Controllers 00:22:10.557 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:10.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:10.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:10.557 Initialization complete. Launching workers. 00:22:10.557 ======================================================== 00:22:10.557 Latency(us) 00:22:10.557 Device Information : IOPS MiB/s Average min max 00:22:10.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 106.00 0.41 9476.59 151.76 45827.17 00:22:10.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16467.11 5000.26 47899.97 00:22:10.557 ======================================================== 00:22:10.557 Total : 167.00 0.65 12030.02 151.76 47899.97 00:22:10.557 00:22:10.557 12:43:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:11.926 Initializing NVMe Controllers 00:22:11.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:11.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:11.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:11.926 Initialization complete. Launching workers. 00:22:11.926 ======================================================== 00:22:11.926 Latency(us) 00:22:11.926 Device Information : IOPS MiB/s Average min max 00:22:11.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8333.09 32.55 3840.95 676.06 10903.15 00:22:11.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3803.20 14.86 8445.96 4943.68 16116.87 00:22:11.926 ======================================================== 00:22:11.926 Total : 12136.29 47.41 5284.04 676.06 16116.87 00:22:11.926 00:22:11.926 12:43:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:11.926 12:43:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:11.926 12:43:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:14.453 Initializing NVMe Controllers 00:22:14.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:14.453 Controller IO queue size 128, less than required. 00:22:14.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:14.453 Controller IO queue size 128, less than required. 00:22:14.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:14.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:14.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:14.453 Initialization complete. Launching workers. 00:22:14.453 ======================================================== 00:22:14.453 Latency(us) 00:22:14.453 Device Information : IOPS MiB/s Average min max 00:22:14.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1732.95 433.24 75111.35 53269.00 111540.25 00:22:14.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 601.98 150.50 228012.89 71222.08 332123.64 00:22:14.453 ======================================================== 00:22:14.453 Total : 2334.93 583.73 114531.79 53269.00 332123.64 00:22:14.453 00:22:14.453 12:43:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:14.711 No valid NVMe controllers or AIO or URING devices found 00:22:14.711 Initializing NVMe Controllers 00:22:14.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:14.711 Controller IO queue size 128, less than required. 00:22:14.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:14.711 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:14.711 Controller IO queue size 128, less than required. 00:22:14.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:14.711 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:14.711 WARNING: Some requested NVMe devices were skipped 00:22:14.711 12:43:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:17.987 Initializing NVMe Controllers 00:22:17.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:17.987 Controller IO queue size 128, less than required. 00:22:17.987 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.987 Controller IO queue size 128, less than required. 00:22:17.987 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:17.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:17.987 Initialization complete. Launching workers. 
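The spdk_nvme_perf runs traced above all hit the same two-namespace subsystem; only the queue depth, IO size and extra flags change between invocations. Reconstructed as plain shell from the RPC calls and perf command lines in this log (it assumes the nvmf_tgt application is already running and that the Malloc0 and Nvme0n1 bdevs already exist, exactly as in this run), the target bring-up plus one initiator run looks roughly like this:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  # target side: TCP transport, one subsystem, two namespaces, one listener
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # becomes NSID 1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # becomes NSID 2
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: the q=1, 4 KiB, 50/50 randrw, 1 s run shown above
  $PERF -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

This is only a sketch of what host/perf.sh drives, not a substitute for the script; the remaining runs in this log vary -q/-o/-O/-t and add flags such as -HI or --transport-stat.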
00:22:17.987 00:22:17.987 ==================== 00:22:17.987 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:17.987 TCP transport: 00:22:17.987 polls: 8028 00:22:17.987 idle_polls: 5296 00:22:17.987 sock_completions: 2732 00:22:17.987 nvme_completions: 5555 00:22:17.987 submitted_requests: 8352 00:22:17.987 queued_requests: 1 00:22:17.987 00:22:17.987 ==================== 00:22:17.987 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:17.987 TCP transport: 00:22:17.987 polls: 11152 00:22:17.987 idle_polls: 8347 00:22:17.987 sock_completions: 2805 00:22:17.987 nvme_completions: 5563 00:22:17.987 submitted_requests: 8334 00:22:17.987 queued_requests: 1 00:22:17.987 ======================================================== 00:22:17.987 Latency(us) 00:22:17.987 Device Information : IOPS MiB/s Average min max 00:22:17.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1388.41 347.10 94646.26 52888.44 140376.00 00:22:17.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1390.41 347.60 92964.48 48467.31 134349.60 00:22:17.987 ======================================================== 00:22:17.987 Total : 2778.82 694.71 93804.76 48467.31 140376.00 00:22:17.987 00:22:17.987 12:43:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:17.987 12:43:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:17.987 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:17.987 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:17.987 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:17.987 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:17.988 rmmod nvme_tcp 00:22:17.988 rmmod nvme_fabrics 00:22:17.988 rmmod nvme_keyring 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1085481 ']' 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1085481 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1085481 ']' 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1085481 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1085481 00:22:17.988 12:43:58 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1085481' 00:22:17.988 killing process with pid 1085481 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1085481 00:22:17.988 12:43:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1085481 00:22:19.361 12:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:19.361 12:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:19.361 12:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:19.361 12:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:19.361 12:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:19.361 12:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:19.361 12:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:19.361 12:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:19.361 12:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:19.361 12:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.361 12:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.361 12:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:21.898 00:22:21.898 real 0m21.776s 00:22:21.898 user 1m7.272s 00:22:21.898 sys 0m5.532s 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.898 ************************************ 00:22:21.898 END TEST nvmf_perf 00:22:21.898 ************************************ 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.898 ************************************ 00:22:21.898 START TEST nvmf_fio_host 00:22:21.898 ************************************ 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:21.898 * Looking for test storage... 
00:22:21.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:21.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.898 --rc genhtml_branch_coverage=1 00:22:21.898 --rc genhtml_function_coverage=1 00:22:21.898 --rc genhtml_legend=1 00:22:21.898 --rc geninfo_all_blocks=1 00:22:21.898 --rc geninfo_unexecuted_blocks=1 00:22:21.898 00:22:21.898 ' 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:21.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.898 --rc genhtml_branch_coverage=1 00:22:21.898 --rc genhtml_function_coverage=1 00:22:21.898 --rc genhtml_legend=1 00:22:21.898 --rc geninfo_all_blocks=1 00:22:21.898 --rc geninfo_unexecuted_blocks=1 00:22:21.898 00:22:21.898 ' 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:21.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.898 --rc genhtml_branch_coverage=1 00:22:21.898 --rc genhtml_function_coverage=1 00:22:21.898 --rc genhtml_legend=1 00:22:21.898 --rc geninfo_all_blocks=1 00:22:21.898 --rc geninfo_unexecuted_blocks=1 00:22:21.898 00:22:21.898 ' 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:21.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.898 --rc genhtml_branch_coverage=1 00:22:21.898 --rc genhtml_function_coverage=1 00:22:21.898 --rc genhtml_legend=1 00:22:21.898 --rc geninfo_all_blocks=1 00:22:21.898 --rc geninfo_unexecuted_blocks=1 00:22:21.898 00:22:21.898 ' 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.898 12:44:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.898 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:21.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:21.899 
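The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33, where an empty string reaches a numeric test: bash's [ builtin requires an integer on both sides of -eq. A minimal reproduction (the variable name here is invented for illustration, not taken from common.sh):

  val=""
  [ "$val" -eq 1 ]         # prints: [: : integer expression expected, returns non-zero
  [ "${val:-0}" -eq 1 ]    # defaulting the empty value keeps the numeric test well-formed

In the trace the status is only consumed as an if condition (execution jumps from common.sh line 33 to line 37), so the message is noise in the log rather than a failure.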
12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:21.899 12:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.430 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.430 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:24.430 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:24.430 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:24.430 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:24.430 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:24.430 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:24.430 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:24.430 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:24.430 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:24.430 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:24.430 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:24.430 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:24.430 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:24.431 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:24.431 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:24.431 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:24.431 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:24.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:24.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:22:24.431 00:22:24.431 --- 10.0.0.2 ping statistics --- 00:22:24.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.431 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:24.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:22:24.431 00:22:24.431 --- 10.0.0.1 ping statistics --- 00:22:24.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.431 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1089457 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1089457 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1089457 ']' 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.431 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.432 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.432 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.432 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.432 [2024-11-15 12:44:04.370421] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:22:24.432 [2024-11-15 12:44:04.370513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.432 [2024-11-15 12:44:04.443302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.432 [2024-11-15 12:44:04.504216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.432 [2024-11-15 12:44:04.504272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.432 [2024-11-15 12:44:04.504301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.432 [2024-11-15 12:44:04.504312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.432 [2024-11-15 12:44:04.504322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:24.432 [2024-11-15 12:44:04.505947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.432 [2024-11-15 12:44:04.506022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.432 [2024-11-15 12:44:04.506088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.432 [2024-11-15 12:44:04.506091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.432 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.432 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:24.432 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:24.688 [2024-11-15 12:44:04.908329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.688 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:24.688 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:24.688 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.688 12:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:24.946 Malloc1 00:22:24.946 12:44:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:25.203 12:44:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:25.462 12:44:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.719 [2024-11-15 12:44:06.043832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.977 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:26.273 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:26.273 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:26.274 12:44:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:26.274 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:26.274 fio-3.35 00:22:26.274 Starting 1 thread 00:22:28.889 00:22:28.889 test: (groupid=0, jobs=1): 
err= 0: pid=1089824: Fri Nov 15 12:44:08 2024 00:22:28.889 read: IOPS=8638, BW=33.7MiB/s (35.4MB/s)(69.1MiB/2049msec) 00:22:28.889 slat (nsec): min=1767, max=103439, avg=2305.57, stdev=1346.25 00:22:28.889 clat (usec): min=2389, max=56426, avg=8079.01, stdev=2716.80 00:22:28.889 lat (usec): min=2412, max=56428, avg=8081.32, stdev=2716.78 00:22:28.889 clat percentiles (usec): 00:22:28.889 | 1.00th=[ 6390], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7439], 00:22:28.889 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8094], 00:22:28.889 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:22:28.889 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[53740], 99.95th=[54789], 00:22:28.889 | 99.99th=[56361] 00:22:28.889 bw ( KiB/s): min=34328, max=36000, per=100.00%, avg=35274.00, stdev=707.73, samples=4 00:22:28.889 iops : min= 8582, max= 9000, avg=8818.50, stdev=176.93, samples=4 00:22:28.889 write: IOPS=8652, BW=33.8MiB/s (35.4MB/s)(69.2MiB/2049msec); 0 zone resets 00:22:28.889 slat (nsec): min=1931, max=100240, avg=2420.51, stdev=1207.05 00:22:28.889 clat (usec): min=913, max=54829, avg=6664.76, stdev=2722.46 00:22:28.889 lat (usec): min=919, max=54831, avg=6667.18, stdev=2722.44 00:22:28.889 clat percentiles (usec): 00:22:28.889 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5932], 20.00th=[ 6128], 00:22:28.889 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:22:28.889 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7177], 95.00th=[ 7308], 00:22:28.889 | 99.00th=[ 7701], 99.50th=[ 7898], 99.90th=[52691], 99.95th=[53740], 00:22:28.889 | 99.99th=[54789] 00:22:28.889 bw ( KiB/s): min=35192, max=35712, per=100.00%, avg=35330.00, stdev=255.11, samples=4 00:22:28.889 iops : min= 8798, max= 8928, avg=8832.50, stdev=63.78, samples=4 00:22:28.889 lat (usec) : 1000=0.01% 00:22:28.889 lat (msec) : 2=0.02%, 4=0.12%, 10=99.50%, 50=0.10%, 100=0.25% 00:22:28.889 cpu : usr=63.96%, sys=34.42%, ctx=110, majf=0, minf=36 00:22:28.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:28.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:28.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:28.889 issued rwts: total=17701,17728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:28.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:28.889 00:22:28.889 Run status group 0 (all jobs): 00:22:28.889 READ: bw=33.7MiB/s (35.4MB/s), 33.7MiB/s-33.7MiB/s (35.4MB/s-35.4MB/s), io=69.1MiB (72.5MB), run=2049-2049msec 00:22:28.889 WRITE: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=69.2MiB (72.6MB), run=2049-2049msec 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:28.889 12:44:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:28.889 12:44:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:28.889 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:28.889 fio-3.35 00:22:28.889 Starting 1 thread 00:22:30.792 [2024-11-15 12:44:10.868643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2332270 is same with the state(6) to be set 00:22:30.792 [2024-11-15 12:44:10.868752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2332270 is same with the state(6) to be set 00:22:31.359 00:22:31.359 test: (groupid=0, jobs=1): err= 0: pid=1090157: Fri Nov 15 12:44:11 2024 00:22:31.359 read: IOPS=8258, BW=129MiB/s (135MB/s)(259MiB/2008msec) 00:22:31.359 slat (usec): min=3, max=121, avg= 3.72, stdev= 2.22 00:22:31.359 clat (usec): min=2452, max=17116, avg=8891.20, stdev=2044.32 00:22:31.359 lat (usec): min=2456, max=17120, avg=8894.92, stdev=2044.32 00:22:31.359 clat percentiles (usec): 00:22:31.359 | 1.00th=[ 4686], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7111], 00:22:31.359 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9372], 00:22:31.359 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11469], 95.00th=[12256], 
00:22:31.359 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15926], 99.95th=[16712], 00:22:31.359 | 99.99th=[17171] 00:22:31.359 bw ( KiB/s): min=62528, max=74080, per=51.55%, avg=68120.00, stdev=5831.11, samples=4 00:22:31.359 iops : min= 3908, max= 4630, avg=4257.50, stdev=364.44, samples=4 00:22:31.359 write: IOPS=4691, BW=73.3MiB/s (76.9MB/s)(139MiB/1892msec); 0 zone resets 00:22:31.359 slat (usec): min=30, max=202, avg=33.89, stdev= 5.90 00:22:31.359 clat (usec): min=5439, max=19138, avg=11618.42, stdev=2066.38 00:22:31.359 lat (usec): min=5471, max=19169, avg=11652.31, stdev=2066.34 00:22:31.359 clat percentiles (usec): 00:22:31.359 | 1.00th=[ 7439], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:22:31.359 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:22:31.359 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14484], 95.00th=[15139], 00:22:31.359 | 99.00th=[16581], 99.50th=[17695], 99.90th=[19006], 99.95th=[19006], 00:22:31.359 | 99.99th=[19268] 00:22:31.359 bw ( KiB/s): min=65440, max=76256, per=94.01%, avg=70568.00, stdev=5525.07, samples=4 00:22:31.359 iops : min= 4090, max= 4766, avg=4410.50, stdev=345.32, samples=4 00:22:31.359 lat (msec) : 4=0.22%, 10=53.56%, 20=46.22% 00:22:31.359 cpu : usr=78.33%, sys=20.48%, ctx=55, majf=0, minf=52 00:22:31.359 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:31.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:31.359 issued rwts: total=16584,8876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:31.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:31.359 00:22:31.359 Run status group 0 (all jobs): 00:22:31.359 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (272MB), run=2008-2008msec 00:22:31.359 WRITE: bw=73.3MiB/s (76.9MB/s), 73.3MiB/s-73.3MiB/s (76.9MB/s-76.9MB/s), io=139MiB (145MB), run=1892-1892msec 00:22:31.359 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.617 rmmod nvme_tcp 00:22:31.617 rmmod nvme_fabrics 00:22:31.617 rmmod nvme_keyring 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 
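Both fio jobs above run through the SPDK fio plugin rather than the kernel NVMe/TCP initiator: fio is launched with build/fio/spdk_nvme preloaded and the target is encoded in --filename, which is why the job banners report ioengine=spdk. The invocation that fio.sh wraps, reproduced from the trace above (same workspace layout assumed):

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
  # the second job uses mock_sgl_config.fio the same way, without the --bs override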
00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1089457 ']' 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1089457 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1089457 ']' 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1089457 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:31.617 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.618 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1089457 00:22:31.877 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:31.877 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:31.877 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1089457' 00:22:31.877 killing process with pid 1089457 00:22:31.877 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1089457 00:22:31.877 12:44:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1089457 00:22:31.877 12:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.877 12:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.877 12:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.877 12:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:31.877 12:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:31.877 12:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.877 12:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.877 12:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.877 12:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:31.877 12:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.877 12:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.877 12:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:34.421 00:22:34.421 real 0m12.454s 00:22:34.421 user 0m36.462s 00:22:34.421 sys 0m4.125s 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.421 ************************************ 00:22:34.421 END TEST nvmf_fio_host 00:22:34.421 ************************************ 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.421 ************************************ 00:22:34.421 START TEST nvmf_failover 00:22:34.421 ************************************ 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:34.421 * Looking for test storage... 00:22:34.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:34.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.421 --rc genhtml_branch_coverage=1 00:22:34.421 --rc genhtml_function_coverage=1 00:22:34.421 --rc genhtml_legend=1 00:22:34.421 --rc geninfo_all_blocks=1 00:22:34.421 --rc geninfo_unexecuted_blocks=1 00:22:34.421 00:22:34.421 ' 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:34.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.421 --rc genhtml_branch_coverage=1 00:22:34.421 --rc genhtml_function_coverage=1 00:22:34.421 --rc genhtml_legend=1 00:22:34.421 --rc geninfo_all_blocks=1 00:22:34.421 --rc geninfo_unexecuted_blocks=1 00:22:34.421 00:22:34.421 ' 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:34.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.421 --rc genhtml_branch_coverage=1 00:22:34.421 --rc genhtml_function_coverage=1 00:22:34.421 --rc genhtml_legend=1 00:22:34.421 --rc geninfo_all_blocks=1 00:22:34.421 --rc geninfo_unexecuted_blocks=1 00:22:34.421 00:22:34.421 ' 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:34.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.421 --rc genhtml_branch_coverage=1 00:22:34.421 --rc genhtml_function_coverage=1 00:22:34.421 --rc genhtml_legend=1 00:22:34.421 --rc geninfo_all_blocks=1 00:22:34.421 --rc geninfo_unexecuted_blocks=1 00:22:34.421 00:22:34.421 ' 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.421 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
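Earlier in this trace common.sh generated a host NQN with nvme gen-hostnqn and stored it, together with the derived host ID, in NVME_HOSTNQN/NVME_HOSTID for use by NVME_CONNECT. This failover test drives I/O through bdevperf instead of the kernel initiator, but a short sketch of how those variables are typically consumed by nvme-cli follows (the hostid derivation below is an assumption for illustration, not a line from this run):

# Connect a kernel NVMe/TCP initiator using the generated host identity (sketch).
NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumed derivation of the bare UUID
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
  --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"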
00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:34.422 12:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.329 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:36.330 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:36.330 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:36.330 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:36.330 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:36.330 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:36.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:22:36.591 00:22:36.591 --- 10.0.0.2 ping statistics --- 00:22:36.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.591 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:36.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:36.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:22:36.591 00:22:36.591 --- 10.0.0.1 ping statistics --- 00:22:36.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.591 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1092485 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1092485 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1092485 ']' 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.591 12:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:36.591 [2024-11-15 12:44:16.768046] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:22:36.591 [2024-11-15 12:44:16.768155] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.591 [2024-11-15 12:44:16.840583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:36.591 [2024-11-15 12:44:16.902117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
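nvmfappstart launches the target application inside the cvl_0_0_ns_spdk namespace with core mask 0xE and then blocks in waitforlisten until the RPC socket answers. A rough sketch of that launch-and-wait step, assuming the default /var/tmp/spdk.sock RPC socket (the real helper also checks that the pid stays alive):

# Start nvmf_tgt in the target namespace and wait for its RPC service (sketch).
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done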
00:22:36.591 [2024-11-15 12:44:16.902167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.591 [2024-11-15 12:44:16.902195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.591 [2024-11-15 12:44:16.902207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.591 [2024-11-15 12:44:16.902216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.591 [2024-11-15 12:44:16.903714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.591 [2024-11-15 12:44:16.903781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.591 [2024-11-15 12:44:16.903785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.850 12:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.850 12:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:36.850 12:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:36.850 12:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:36.850 12:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:36.850 12:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.850 12:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:37.108 [2024-11-15 12:44:17.290640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.108 12:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:37.367 Malloc0 00:22:37.367 12:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:37.626 12:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:37.884 12:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.142 [2024-11-15 12:44:18.410541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.142 12:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:38.401 [2024-11-15 12:44:18.691357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:38.401 12:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:38.660 [2024-11-15 12:44:18.964367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:22:38.660 12:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1092776 00:22:38.660 12:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:38.660 12:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:38.660 12:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1092776 /var/tmp/bdevperf.sock 00:22:38.660 12:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1092776 ']' 00:22:38.660 12:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.660 12:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.660 12:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.660 12:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.660 12:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:39.231 12:44:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.231 12:44:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:39.231 12:44:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:39.490 NVMe0n1 00:22:39.490 12:44:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:40.058 00:22:40.058 12:44:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1092908 00:22:40.058 12:44:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:40.058 12:44:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:40.995 12:44:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.255 [2024-11-15 12:44:21.473839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255d380 is same with the state(6) to be set 00:22:41.255 [2024-11-15 12:44:21.473910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255d380 is same with the state(6) to be set 00:22:41.255 [2024-11-15 12:44:21.473926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255d380 is same with the state(6) to be set 00:22:41.255 
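At this point the failover test has created a Malloc0-backed subsystem, exposed it on TCP ports 4420, 4421 and 4422, attached bdevperf to the 4420 and 4421 paths with -x failover, started perform_tests, and then removed the 4420 listener; the recv-state errors that follow appear while that first path is being torn down under load. Condensed from the commands above, the path manipulation amounts to:

# Path setup and first failover trigger, as issued earlier in this log (condensed).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420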
[2024-11-15 12:44:21.473939 - 12:44:21.474659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255d380 is same with the state(6) to be set (message repeated with consecutive timestamps)
00:22:41.256 12:44:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:22:44.548 12:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:44.806
00:22:44.806 12:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:45.064 [2024-11-15 12:44:25.241981 - 12:44:25.242187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255de30 is same with the state(6) to be set (message repeated with consecutive timestamps)
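The remainder of the test keeps rotating the active path: add a listener on a new port, give bdevperf a moment to fail over, then remove the port that was previously serving I/O. A small hypothetical helper capturing that rotation (failover.sh performs these steps inline rather than through such a function):

# Hypothetical rotate_listener helper illustrating the pattern used below (sketch).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
rotate_listener() {
  local add_port=$1 remove_port=$2
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$add_port"
  sleep 1
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$remove_port"
}
rotate_listener 4420 4422   # the next step in this log re-adds 4420 and then drops 4422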
00:22:45.064 12:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:22:48.355 12:44:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:48.355 [2024-11-15 12:44:28.570153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:48.355 12:44:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:22:49.293 12:44:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:49.552 [2024-11-15 12:44:29.846556 - 12:44:29.847485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2423220 is same with the state(6) to be set (message repeated with consecutive timestamps)
state(6) to be set 00:22:49.553 12:44:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1092908 00:22:56.134 { 00:22:56.134 "results": [ 00:22:56.134 { 00:22:56.134 "job": "NVMe0n1", 00:22:56.134 "core_mask": "0x1", 00:22:56.134 "workload": "verify", 00:22:56.134 "status": "finished", 00:22:56.134 "verify_range": { 00:22:56.134 "start": 0, 00:22:56.134 "length": 16384 00:22:56.134 }, 00:22:56.134 "queue_depth": 128, 00:22:56.134 "io_size": 4096, 00:22:56.134 "runtime": 15.009235, 00:22:56.134 "iops": 8278.703078471355, 00:22:56.134 "mibps": 32.33868390027873, 00:22:56.134 "io_failed": 12756, 00:22:56.134 "io_timeout": 0, 00:22:56.134 "avg_latency_us": 13989.885209248865, 00:22:56.134 "min_latency_us": 540.0651851851852, 00:22:56.134 "max_latency_us": 19612.254814814816 00:22:56.134 } 00:22:56.134 ], 00:22:56.134 "core_count": 1 00:22:56.134 } 00:22:56.134 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1092776 00:22:56.134 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1092776 ']' 00:22:56.134 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1092776 00:22:56.134 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:56.134 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.134 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1092776 00:22:56.134 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:56.134 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:56.134 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1092776' 00:22:56.134 killing process with pid 1092776 00:22:56.134 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1092776 00:22:56.135 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1092776 00:22:56.135 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:56.135 [2024-11-15 12:44:19.032590] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:22:56.135 [2024-11-15 12:44:19.032677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092776 ] 00:22:56.135 [2024-11-15 12:44:19.100740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.135 [2024-11-15 12:44:19.160488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.135 Running I/O for 15 seconds... 
00:22:56.135 8284.00 IOPS, 32.36 MiB/s [2024-11-15T11:44:36.479Z] [2024-11-15 12:44:21.476599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.476644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.476675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.476691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.476709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.476734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.476751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.476766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.476781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.476795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.476810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.476825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.476840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.476853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.476868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.476882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.476898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.476912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.476928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.476942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.476957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.476971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.476993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 
12:44:21.477248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.135 [2024-11-15 12:44:21.477687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.135 [2024-11-15 12:44:21.477711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.477732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.477749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.477763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.477778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.477792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.477807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.477821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.477836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.477850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.477864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.477878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.477892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.477906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.477920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.477934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.477948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.477962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.477976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.477989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.136 [2024-11-15 12:44:21.478045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76504 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:56.136 [2024-11-15 12:44:21.478415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.136 [2024-11-15 12:44:21.478505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.136 [2024-11-15 12:44:21.478534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.136 [2024-11-15 12:44:21.478562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.136 [2024-11-15 12:44:21.478590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.136 [2024-11-15 12:44:21.478619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.136 [2024-11-15 12:44:21.478647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.136 [2024-11-15 12:44:21.478675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.136 [2024-11-15 12:44:21.478703] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.136 [2024-11-15 12:44:21.478741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.136 [2024-11-15 12:44:21.478771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.136 [2024-11-15 12:44:21.478798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.136 [2024-11-15 12:44:21.478832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.136 [2024-11-15 12:44:21.478847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.478860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.478875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.478889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.478904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.478918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.478932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.478946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.478960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.478974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.478989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 
12:44:21.479588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.137 [2024-11-15 12:44:21.479975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.137 [2024-11-15 12:44:21.479988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:21.480016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:21.480045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:21.480072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:21.480100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:21.480128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:21.480156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:78 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:21.480184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:21.480217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:21.480246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:21.480278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:21.480307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:21.480334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.138 [2024-11-15 12:44:21.480385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.138 [2024-11-15 12:44:21.480396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77128 len:8 PRP1 0x0 PRP2 0x0 00:22:56.138 [2024-11-15 12:44:21.480409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480486] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:56.138 [2024-11-15 12:44:21.480526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.138 [2024-11-15 12:44:21.480545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.138 [2024-11-15 12:44:21.480572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.138 [2024-11-15 12:44:21.480599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.138 [2024-11-15 12:44:21.480625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:21.480638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:56.138 [2024-11-15 12:44:21.480701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226f560 (9): Bad file descriptor 00:22:56.138 [2024-11-15 12:44:21.483925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:56.138 [2024-11-15 12:44:21.513793] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:22:56.138 8198.50 IOPS, 32.03 MiB/s [2024-11-15T11:44:36.482Z] 8326.00 IOPS, 32.52 MiB/s [2024-11-15T11:44:36.482Z] 8399.25 IOPS, 32.81 MiB/s [2024-11-15T11:44:36.482Z] [2024-11-15 12:44:25.245321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.138 [2024-11-15 12:44:25.245366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.138 [2024-11-15 12:44:25.245424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.138 [2024-11-15 12:44:25.245466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.138 [2024-11-15 12:44:25.245495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.138 [2024-11-15 12:44:25.245522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.138 [2024-11-15 12:44:25.245550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245564] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.138 [2024-11-15 12:44:25.245577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.138 [2024-11-15 12:44:25.245604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.138 [2024-11-15 12:44:25.245632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.138 [2024-11-15 12:44:25.245659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.138 [2024-11-15 12:44:25.245686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.138 [2024-11-15 12:44:25.245741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.138 [2024-11-15 12:44:25.245770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.138 [2024-11-15 12:44:25.245798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:25.245832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:25.245860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 
lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.138 [2024-11-15 12:44:25.245888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.138 [2024-11-15 12:44:25.245903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.245916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.245931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.245944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.245959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.245972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.245986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.245999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:56.139 [2024-11-15 12:44:25.246169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246464] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.139 [2024-11-15 12:44:25.246611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.139 [2024-11-15 12:44:25.246626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.246640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.246655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.246668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.246682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.246696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.246710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.246731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.246747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.246761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.246776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.246789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.246804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.246817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.246831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.246845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.246860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.246873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.246887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.246901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.246916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.246939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.246955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.246968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.246983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.246997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.247025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.247055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.247083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.247111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.247139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.247168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.140 [2024-11-15 12:44:25.247196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.140 [2024-11-15 12:44:25.247245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79640 len:8 PRP1 0x0 PRP2 0x0 00:22:56.140 [2024-11-15 12:44:25.247258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.140 [2024-11-15 12:44:25.247288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.140 [2024-11-15 12:44:25.247299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79648 len:8 PRP1 0x0 PRP2 0x0 00:22:56.140 [2024-11-15 12:44:25.247312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.140 [2024-11-15 12:44:25.247340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.140 [2024-11-15 12:44:25.247351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79656 len:8 PRP1 0x0 PRP2 0x0 00:22:56.140 [2024-11-15 12:44:25.247364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.140 [2024-11-15 
12:44:25.247387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.140 [2024-11-15 12:44:25.247399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79664 len:8 PRP1 0x0 PRP2 0x0 00:22:56.140 [2024-11-15 12:44:25.247411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.140 [2024-11-15 12:44:25.247434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.140 [2024-11-15 12:44:25.247445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79672 len:8 PRP1 0x0 PRP2 0x0 00:22:56.140 [2024-11-15 12:44:25.247457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.140 [2024-11-15 12:44:25.247480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.140 [2024-11-15 12:44:25.247492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79680 len:8 PRP1 0x0 PRP2 0x0 00:22:56.140 [2024-11-15 12:44:25.247504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.140 [2024-11-15 12:44:25.247527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.140 [2024-11-15 12:44:25.247538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79688 len:8 PRP1 0x0 PRP2 0x0 00:22:56.140 [2024-11-15 12:44:25.247550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.140 [2024-11-15 12:44:25.247573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.140 [2024-11-15 12:44:25.247584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79696 len:8 PRP1 0x0 PRP2 0x0 00:22:56.140 [2024-11-15 12:44:25.247596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.140 [2024-11-15 12:44:25.247619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.140 [2024-11-15 12:44:25.247630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79704 len:8 PRP1 0x0 PRP2 0x0 00:22:56.140 [2024-11-15 12:44:25.247642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.140 [2024-11-15 12:44:25.247665] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.140 [2024-11-15 12:44:25.247676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79712 len:8 PRP1 0x0 PRP2 0x0 00:22:56.140 [2024-11-15 12:44:25.247692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.140 [2024-11-15 12:44:25.247716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.140 [2024-11-15 12:44:25.247735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79720 len:8 PRP1 0x0 PRP2 0x0 00:22:56.140 [2024-11-15 12:44:25.247747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.140 [2024-11-15 12:44:25.247760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.140 [2024-11-15 12:44:25.247771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.140 [2024-11-15 12:44:25.247782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79728 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.247794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.247806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.247817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.247828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79736 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.247840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.247853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.247864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.247874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79744 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.247886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.247898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.247909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.247919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79752 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.247931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.247943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.247953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.247964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79760 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.247976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.247988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.247998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.248009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79768 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.248026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.248039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.248050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.248064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79776 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.248077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.248090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.248100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.248111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79784 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.248123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.248136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.248146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.248157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79792 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.248169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.248182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.248192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.248203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79800 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.248215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.248228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.248239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 
12:44:25.248250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79808 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.248262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.248274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.248285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.248296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79816 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.248308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.248320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.248330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.248341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79824 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.248353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.248366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.248376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.248387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79832 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.248405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.248418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.248432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.248444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79840 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.248456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.248469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.248479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.248490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79848 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.248503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.248515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.248526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.248536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79856 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.248549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.248561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.248572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.248583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79864 len:8 PRP1 0x0 PRP2 0x0 00:22:56.141 [2024-11-15 12:44:25.248595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.141 [2024-11-15 12:44:25.248607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.141 [2024-11-15 12:44:25.248618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.141 [2024-11-15 12:44:25.248629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79872 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.248641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.248654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.248664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.248675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79880 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.248687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.248700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.248710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.248728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79888 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.248743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.248756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.248767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.248778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79896 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.248796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.248813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.248824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.248835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:79904 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.248848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.248860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.248871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.248882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79912 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.248894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.248907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.248917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.248928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79920 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.248941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.248953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.248964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.248975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79928 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.248987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.249010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.249022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79936 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.249034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.249057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.249068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79944 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.249080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.249104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.249115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79952 len:8 PRP1 0x0 PRP2 0x0 
00:22:56.142 [2024-11-15 12:44:25.249127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.249150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.249160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79960 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.249186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.249210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.249221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79968 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.249233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.249257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.249268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79976 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.249280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.249304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.249314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79984 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.249327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.249350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.249361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79992 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.249373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.249402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.249413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80000 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.249426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.249449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.249459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80008 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.249472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.249495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.249506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80016 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.249518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.249541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.249556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80024 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.249569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.249592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.249603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80032 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.249615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.142 [2024-11-15 12:44:25.249638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.142 [2024-11-15 12:44:25.249649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80040 len:8 PRP1 0x0 PRP2 0x0 00:22:56.142 [2024-11-15 12:44:25.249661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.142 [2024-11-15 12:44:25.249674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.249685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.249696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80048 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.249708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.249730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.249742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.249753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80056 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.249765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.249779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.249795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.249806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80064 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.249818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.249831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.249842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.249853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80072 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.249865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.249877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.249888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.249899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80080 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.249911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.249927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.249938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.249949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80088 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.249962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.249975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.249985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.249996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80096 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.250008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:56.143 [2024-11-15 12:44:25.250020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.250031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.250042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80104 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.250054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.250066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.250076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.250087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80112 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.250099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.250112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.250122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.250133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80120 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.250145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.250158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.250174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.250185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80128 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.250197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.250210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.250221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.250232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80136 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.250244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.250257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.250267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.250278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80144 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.250294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.250307] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.250318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.250328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80152 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.250341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.250353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.143 [2024-11-15 12:44:25.250364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.143 [2024-11-15 12:44:25.250374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80160 len:8 PRP1 0x0 PRP2 0x0 00:22:56.143 [2024-11-15 12:44:25.250387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.250455] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:56.143 [2024-11-15 12:44:25.250493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.143 [2024-11-15 12:44:25.250512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.250527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.143 [2024-11-15 12:44:25.250540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.250560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.143 [2024-11-15 12:44:25.250582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.250597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.143 [2024-11-15 12:44:25.250610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.143 [2024-11-15 12:44:25.250623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:56.143 [2024-11-15 12:44:25.253851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:56.143 [2024-11-15 12:44:25.253892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226f560 (9): Bad file descriptor 00:22:56.143 8263.60 IOPS, 32.28 MiB/s [2024-11-15T11:44:36.487Z] [2024-11-15 12:44:25.372066] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:22:56.144 8255.17 IOPS, 32.25 MiB/s [2024-11-15T11:44:36.488Z] 8265.43 IOPS, 32.29 MiB/s [2024-11-15T11:44:36.488Z] 8281.88 IOPS, 32.35 MiB/s [2024-11-15T11:44:36.488Z] 8294.56 IOPS, 32.40 MiB/s [2024-11-15T11:44:36.488Z] [2024-11-15 12:44:29.847433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.144 [2024-11-15 12:44:29.847474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.847492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.144 [2024-11-15 12:44:29.847506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.847520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.144 [2024-11-15 12:44:29.847539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.847554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.144 [2024-11-15 12:44:29.847567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.847579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226f560 is same with the state(6) to be set 00:22:56.144 [2024-11-15 12:44:29.847652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.847673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.847696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.847715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.847741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.847755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.847771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.847784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.847798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.847812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 
12:44:29.847827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.847840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.847856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.847870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.847884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.847898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.847913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.847926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.847940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.847953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.847968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.847987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:27384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:27448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:27472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.144 [2024-11-15 12:44:29.848403] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.144 [2024-11-15 12:44:29.848415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27560 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:27584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.848974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.848989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:27640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:56.145 [2024-11-15 12:44:29.849003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.849050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.849077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:27664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.849107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:27672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.849134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.849160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:27688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.849186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.849213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:27704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.849239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.849265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 
12:44:29.849292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.849318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:27736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.849344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.849371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.849397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.145 [2024-11-15 12:44:29.849412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:27760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.145 [2024-11-15 12:44:29.849425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:27768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.146 [2024-11-15 12:44:29.849456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:27776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.146 [2024-11-15 12:44:29.849483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.849511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.146 [2024-11-15 12:44:29.849538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.146 [2024-11-15 12:44:29.849565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.146 [2024-11-15 12:44:29.849593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.146 [2024-11-15 12:44:29.849619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:27816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.146 [2024-11-15 12:44:29.849646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:27824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.146 [2024-11-15 12:44:29.849672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.146 [2024-11-15 12:44:29.849715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.146 [2024-11-15 12:44:29.849752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.849780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.849812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:27872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.849840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:27880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.849868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.849897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.849925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.849953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.849981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.849995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:27944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:27952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:27960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:27984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:27992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:28032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 12:44:29.850451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.146 [2024-11-15 12:44:29.850463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.146 [2024-11-15 
12:44:29.850477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:28048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:28056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:28080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:28096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:28112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:28144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:28176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.850974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.850987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:28192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.851023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.851051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:28208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.851079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:28216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.851107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.851135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.851163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:28240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.851192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.851219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:28256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.851247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.851275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.851308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:28280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.851336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28288 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:56.147 [2024-11-15 12:44:29.851364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.147 [2024-11-15 12:44:29.851414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.147 [2024-11-15 12:44:29.851426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28296 len:8 PRP1 0x0 PRP2 0x0 00:22:56.147 [2024-11-15 12:44:29.851439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.147 [2024-11-15 12:44:29.851500] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:56.147 [2024-11-15 12:44:29.851519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:56.147 [2024-11-15 12:44:29.854789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:56.147 [2024-11-15 12:44:29.854827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226f560 (9): Bad file descriptor 00:22:56.147 [2024-11-15 12:44:30.013020] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:22:56.147 8180.60 IOPS, 31.96 MiB/s [2024-11-15T11:44:36.491Z] 8219.00 IOPS, 32.11 MiB/s [2024-11-15T11:44:36.491Z] 8235.92 IOPS, 32.17 MiB/s [2024-11-15T11:44:36.491Z] 8249.92 IOPS, 32.23 MiB/s [2024-11-15T11:44:36.491Z] 8261.14 IOPS, 32.27 MiB/s [2024-11-15T11:44:36.491Z] 8275.27 IOPS, 32.33 MiB/s 00:22:56.147 Latency(us) 00:22:56.147 [2024-11-15T11:44:36.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.147 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:56.147 Verification LBA range: start 0x0 length 0x4000 00:22:56.147 NVMe0n1 : 15.01 8278.70 32.34 849.88 0.00 13989.89 540.07 19612.25 00:22:56.147 [2024-11-15T11:44:36.491Z] =================================================================================================================== 00:22:56.147 [2024-11-15T11:44:36.491Z] Total : 8278.70 32.34 849.88 0.00 13989.89 540.07 19612.25 00:22:56.147 Received shutdown signal, test time was about 15.000000 seconds 00:22:56.147 00:22:56.148 Latency(us) 00:22:56.148 [2024-11-15T11:44:36.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.148 [2024-11-15T11:44:36.492Z] =================================================================================================================== 00:22:56.148 [2024-11-15T11:44:36.492Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1094700 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock 
-q 128 -o 4096 -w verify -t 1 -f 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1094700 /var/tmp/bdevperf.sock 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1094700 ']' 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:56.148 12:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:56.148 [2024-11-15 12:44:36.191053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:56.148 12:44:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:56.148 [2024-11-15 12:44:36.455746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:56.407 12:44:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:56.665 NVMe0n1 00:22:56.665 12:44:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:57.233 00:22:57.233 12:44:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:57.491 00:22:57.491 12:44:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:57.491 12:44:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:57.749 12:44:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:58.317 12:44:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:01.612 12:44:41 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:01.612 12:44:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:01.612 12:44:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1095419 00:23:01.612 12:44:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:01.612 12:44:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1095419 00:23:02.550 { 00:23:02.550 "results": [ 00:23:02.550 { 00:23:02.550 "job": "NVMe0n1", 00:23:02.550 "core_mask": "0x1", 00:23:02.550 "workload": "verify", 00:23:02.550 "status": "finished", 00:23:02.550 "verify_range": { 00:23:02.550 "start": 0, 00:23:02.550 "length": 16384 00:23:02.550 }, 00:23:02.550 "queue_depth": 128, 00:23:02.550 "io_size": 4096, 00:23:02.550 "runtime": 1.011203, 00:23:02.550 "iops": 8500.765919404907, 00:23:02.550 "mibps": 33.206116872675416, 00:23:02.550 "io_failed": 0, 00:23:02.550 "io_timeout": 0, 00:23:02.550 "avg_latency_us": 14961.24233217862, 00:23:02.550 "min_latency_us": 1929.671111111111, 00:23:02.550 "max_latency_us": 15631.54962962963 00:23:02.550 } 00:23:02.550 ], 00:23:02.550 "core_count": 1 00:23:02.550 } 00:23:02.550 12:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:02.550 [2024-11-15 12:44:35.707588] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:23:02.550 [2024-11-15 12:44:35.707679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094700 ] 00:23:02.550 [2024-11-15 12:44:35.776019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.550 [2024-11-15 12:44:35.831386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.550 [2024-11-15 12:44:38.334613] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:02.550 [2024-11-15 12:44:38.334698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.550 [2024-11-15 12:44:38.334754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.550 [2024-11-15 12:44:38.334771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.550 [2024-11-15 12:44:38.334786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.550 [2024-11-15 12:44:38.334800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.550 [2024-11-15 12:44:38.334814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.550 [2024-11-15 12:44:38.334829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.550 [2024-11-15 12:44:38.334842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.550 [2024-11-15 12:44:38.334857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:02.550 [2024-11-15 12:44:38.334901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:02.550 [2024-11-15 12:44:38.334932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198b560 (9): Bad file descriptor 00:23:02.550 [2024-11-15 12:44:38.436865] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:02.550 Running I/O for 1 seconds... 00:23:02.550 8413.00 IOPS, 32.86 MiB/s 00:23:02.550 Latency(us) 00:23:02.550 [2024-11-15T11:44:42.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.550 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:02.550 Verification LBA range: start 0x0 length 0x4000 00:23:02.550 NVMe0n1 : 1.01 8500.77 33.21 0.00 0.00 14961.24 1929.67 15631.55 00:23:02.550 [2024-11-15T11:44:42.894Z] =================================================================================================================== 00:23:02.550 [2024-11-15T11:44:42.894Z] Total : 8500.77 33.21 0.00 0.00 14961.24 1929.67 15631.55 00:23:02.550 12:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:02.550 12:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:02.809 12:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:03.067 12:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:03.067 12:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:03.325 12:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:03.583 12:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:06.876 12:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:06.876 12:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:06.876 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1094700 00:23:06.876 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1094700 ']' 00:23:06.876 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1094700 00:23:06.876 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:06.876 12:44:47 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.876 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1094700 00:23:07.134 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:07.134 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:07.134 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1094700' 00:23:07.134 killing process with pid 1094700 00:23:07.134 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1094700 00:23:07.134 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1094700 00:23:07.134 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:07.134 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:07.393 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:07.393 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:07.393 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:07.393 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:07.393 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:07.393 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:07.394 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:07.394 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:07.394 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:07.394 rmmod nvme_tcp 00:23:07.394 rmmod nvme_fabrics 00:23:07.654 rmmod nvme_keyring 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1092485 ']' 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1092485 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1092485 ']' 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1092485 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1092485 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1092485' 00:23:07.654 killing process with pid 1092485 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1092485 00:23:07.654 12:44:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1092485 00:23:07.914 12:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:07.914 12:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:07.914 12:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:07.914 12:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:07.914 12:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:07.914 12:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:07.914 12:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:07.914 12:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:07.914 12:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:07.914 12:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.914 12:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.914 12:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.823 12:44:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:09.823 00:23:09.823 real 0m35.823s 00:23:09.823 user 2m6.743s 00:23:09.823 sys 0m5.775s 00:23:09.823 12:44:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.823 12:44:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:09.823 ************************************ 00:23:09.823 END TEST nvmf_failover 00:23:09.823 ************************************ 00:23:09.823 12:44:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:09.823 12:44:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:09.823 12:44:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.823 12:44:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.823 ************************************ 00:23:09.823 START TEST nvmf_host_discovery 00:23:09.823 ************************************ 00:23:09.823 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:10.082 * Looking for test storage... 
00:23:10.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:10.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.082 --rc genhtml_branch_coverage=1 00:23:10.082 --rc genhtml_function_coverage=1 00:23:10.082 --rc genhtml_legend=1 00:23:10.082 --rc geninfo_all_blocks=1 00:23:10.082 --rc geninfo_unexecuted_blocks=1 00:23:10.082 00:23:10.082 ' 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:10.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.082 --rc genhtml_branch_coverage=1 00:23:10.082 --rc genhtml_function_coverage=1 00:23:10.082 --rc genhtml_legend=1 00:23:10.082 --rc geninfo_all_blocks=1 00:23:10.082 --rc geninfo_unexecuted_blocks=1 00:23:10.082 00:23:10.082 ' 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:10.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.082 --rc genhtml_branch_coverage=1 00:23:10.082 --rc genhtml_function_coverage=1 00:23:10.082 --rc genhtml_legend=1 00:23:10.082 --rc geninfo_all_blocks=1 00:23:10.082 --rc geninfo_unexecuted_blocks=1 00:23:10.082 00:23:10.082 ' 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:10.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.082 --rc genhtml_branch_coverage=1 00:23:10.082 --rc genhtml_function_coverage=1 00:23:10.082 --rc genhtml_legend=1 00:23:10.082 --rc geninfo_all_blocks=1 00:23:10.082 --rc geninfo_unexecuted_blocks=1 00:23:10.082 00:23:10.082 ' 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:10.082 12:44:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.082 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:10.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:10.083 12:44:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.616 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.616 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:12.616 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:12.616 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:12.616 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:12.616 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:12.616 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:12.616 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:12.616 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:12.616 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:12.616 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:12.616 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:12.616 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:12.616 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:12.617 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:12.617 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.617 12:44:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:12.617 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:12.617 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:12.617 
12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.617 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:12.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:23:12.618 00:23:12.618 --- 10.0.0.2 ping statistics --- 00:23:12.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.618 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:12.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:23:12.618 00:23:12.618 --- 10.0.0.1 ping statistics --- 00:23:12.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.618 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1098038 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1098038 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1098038 ']' 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.618 [2024-11-15 12:44:52.660138] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:23:12.618 [2024-11-15 12:44:52.660225] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.618 [2024-11-15 12:44:52.735369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.618 [2024-11-15 12:44:52.791675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.618 [2024-11-15 12:44:52.791733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.618 [2024-11-15 12:44:52.791763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.618 [2024-11-15 12:44:52.791774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.618 [2024-11-15 12:44:52.791783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.618 [2024-11-15 12:44:52.792356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.618 [2024-11-15 12:44:52.939640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.618 [2024-11-15 12:44:52.947861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:12.618 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.619 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:12.619 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.619 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.878 null0 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.878 null1 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1098172 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1098172 /tmp/host.sock 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1098172 ']' 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:12.878 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.878 12:44:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.878 [2024-11-15 12:44:53.020820] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:23:12.878 [2024-11-15 12:44:53.020901] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1098172 ] 00:23:12.878 [2024-11-15 12:44:53.085611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.878 [2024-11-15 12:44:53.142416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.137 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:13.138 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.397 [2024-11-15 12:44:53.577486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:13.397 12:44:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.397 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.398 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.398 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.398 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.398 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.398 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.657 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:13.657 12:44:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:14.227 [2024-11-15 12:44:54.357949] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:14.227 [2024-11-15 12:44:54.357991] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:14.227 [2024-11-15 12:44:54.358029] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:14.227 
[2024-11-15 12:44:54.445314] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:14.227 [2024-11-15 12:44:54.506098] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:14.227 [2024-11-15 12:44:54.507042] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x84ef80:1 started. 00:23:14.227 [2024-11-15 12:44:54.508821] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:14.227 [2024-11-15 12:44:54.508842] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:14.227 [2024-11-15 12:44:54.515351] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x84ef80 was disconnected and freed. delete nvme_qpair. 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:14.485 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:14.486 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:14.486 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:14.486 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:14.486 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.486 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:14.486 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.486 12:44:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:14.486 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.486 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:14.486 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.746 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:14.746 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:14.747 12:44:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:15.006 [2024-11-15 12:44:55.144947] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x84f660:1 started. 00:23:15.006 [2024-11-15 12:44:55.148252] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x84f660 was disconnected and freed. delete nvme_qpair. 
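The lines tagged host/discovery.sh@55, @59 and @63 above are the test's query helpers: each issues a JSON-RPC against the host application's socket (/tmp/host.sock in this run) and normalizes the result with jq, sort and xargs. A sketch assembled from those trace lines follows; rpc_cmd is the autotest RPC wrapper used throughout the trace, and HOST_SOCK is an assumed variable name standing in for the literal path seen above, so this is not the verbatim script.

    HOST_SOCK=/tmp/host.sock   # assumed name; the trace shows the literal path

    get_subsystem_names() {
        # Names of attached NVMe-oF controllers, e.g. "nvme0"
        rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Namespaces exposed as bdevs, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {
        # TCP service ports of all paths to controller $1, e.g. "4420 4421"
        rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

A typical use, matching host/discovery.sh@105-@108 in this trace:

    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'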
00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.006 [2024-11-15 12:44:55.218166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:15.006 [2024-11-15 12:44:55.218581] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:15.006 [2024-11-15 12:44:55.218627] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.006 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.007 [2024-11-15 12:44:55.306303] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:15.007 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.265 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:15.265 12:44:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:15.265 [2024-11-15 12:44:55.372051] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:15.265 [2024-11-15 12:44:55.372115] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:15.265 [2024-11-15 12:44:55.372131] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:15.265 [2024-11-15 12:44:55.372140] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.205 [2024-11-15 12:44:56.450396] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:16.205 [2024-11-15 12:44:56.450438] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:16.205 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.206 [2024-11-15 12:44:56.459921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.206 [2024-11-15 12:44:56.459957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.206 [2024-11-15 12:44:56.459984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:16.206 [2024-11-15 12:44:56.459998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.206 [2024-11-15 12:44:56.460012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.206 [2024-11-15 12:44:56.460035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.206 [2024-11-15 12:44:56.460050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.206 [2024-11-15 12:44:56.460064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.206 [2024-11-15 12:44:56.460077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81f550 is same with the state(6) to be set 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.206 [2024-11-15 12:44:56.469914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81f550 (9): Bad file descriptor 00:23:16.206 [2024-11-15 12:44:56.479956] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:16.206 [2024-11-15 12:44:56.479978] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:16.206 [2024-11-15 12:44:56.479989] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:16.206 [2024-11-15 12:44:56.479997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:16.206 [2024-11-15 12:44:56.480050] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:16.206 [2024-11-15 12:44:56.480243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.206 [2024-11-15 12:44:56.480274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81f550 with addr=10.0.0.2, port=4420 00:23:16.206 [2024-11-15 12:44:56.480290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81f550 is same with the state(6) to be set 00:23:16.206 [2024-11-15 12:44:56.480314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81f550 (9): Bad file descriptor 00:23:16.206 [2024-11-15 12:44:56.480337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:16.206 [2024-11-15 12:44:56.480352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:16.206 [2024-11-15 12:44:56.480367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:16.206 [2024-11-15 12:44:56.480381] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:16.206 [2024-11-15 12:44:56.480392] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
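The notification checks tagged host/discovery.sh@74/@75 and @79/@80 poll the host for bdev add/remove notifications newer than the last notify_id that was consumed; the running notification_count/notify_id values printed in the trace are produced by this pattern. A sketch reconstructed from those trace lines, reusing the assumed HOST_SOCK name from the sketch above (the real helpers may derive notify_id differently):

    get_notification_count() {
        # Count notifications with id greater than the last one already seen.
        notification_count=$(rpc_cmd -s "$HOST_SOCK" notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        # Notifications arrive asynchronously, so re-check inside the wait loop.
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }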
00:23:16.206 [2024-11-15 12:44:56.480400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:16.206 [2024-11-15 12:44:56.490082] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:16.206 [2024-11-15 12:44:56.490101] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:16.206 [2024-11-15 12:44:56.490110] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:16.206 [2024-11-15 12:44:56.490117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:16.206 [2024-11-15 12:44:56.490154] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:16.206 [2024-11-15 12:44:56.490384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.206 [2024-11-15 12:44:56.490411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81f550 with addr=10.0.0.2, port=4420 00:23:16.206 [2024-11-15 12:44:56.490428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81f550 is same with the state(6) to be set 00:23:16.206 [2024-11-15 12:44:56.490467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81f550 (9): Bad file descriptor 00:23:16.206 [2024-11-15 12:44:56.490503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:16.206 [2024-11-15 12:44:56.490521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:16.206 [2024-11-15 12:44:56.490534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:16.206 [2024-11-15 12:44:56.490546] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:16.206 [2024-11-15 12:44:56.490555] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:16.206 [2024-11-15 12:44:56.490562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:16.206 [2024-11-15 12:44:56.500187] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:16.206 [2024-11-15 12:44:56.500220] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:16.206 [2024-11-15 12:44:56.500229] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:16.206 [2024-11-15 12:44:56.500236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:16.206 [2024-11-15 12:44:56.500273] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.206 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.206 [2024-11-15 12:44:56.501387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.206 [2024-11-15 12:44:56.501431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81f550 with addr=10.0.0.2, port=4420 00:23:16.206 [2024-11-15 12:44:56.501449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81f550 is same with the state(6) to be set 00:23:16.206 [2024-11-15 12:44:56.501472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81f550 (9): Bad file descriptor 00:23:16.206 [2024-11-15 12:44:56.501520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:16.206 [2024-11-15 12:44:56.501539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:16.206 [2024-11-15 12:44:56.501558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:16.206 [2024-11-15 12:44:56.501571] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:16.206 [2024-11-15 12:44:56.501580] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:16.206 [2024-11-15 12:44:56.501588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:16.206 [2024-11-15 12:44:56.510307] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:16.206 [2024-11-15 12:44:56.510345] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:16.206 [2024-11-15 12:44:56.510354] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:16.206 [2024-11-15 12:44:56.510361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:16.206 [2024-11-15 12:44:56.510401] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:16.206 [2024-11-15 12:44:56.510599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.206 [2024-11-15 12:44:56.510627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81f550 with addr=10.0.0.2, port=4420 00:23:16.206 [2024-11-15 12:44:56.510643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81f550 is same with the state(6) to be set 00:23:16.206 [2024-11-15 12:44:56.510666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81f550 (9): Bad file descriptor 00:23:16.206 [2024-11-15 12:44:56.510687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:16.207 [2024-11-15 12:44:56.510701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:16.207 [2024-11-15 12:44:56.510714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:16.207 [2024-11-15 12:44:56.510737] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:16.207 [2024-11-15 12:44:56.510748] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:16.207 [2024-11-15 12:44:56.510755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:16.207 [2024-11-15 12:44:56.520434] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:16.207 [2024-11-15 12:44:56.520454] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:16.207 [2024-11-15 12:44:56.520463] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:16.207 [2024-11-15 12:44:56.520471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:16.207 [2024-11-15 12:44:56.520508] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:16.207 [2024-11-15 12:44:56.520725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.207 [2024-11-15 12:44:56.520753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81f550 with addr=10.0.0.2, port=4420 00:23:16.207 [2024-11-15 12:44:56.520770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81f550 is same with the state(6) to be set 00:23:16.207 [2024-11-15 12:44:56.520792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81f550 (9): Bad file descriptor 00:23:16.207 [2024-11-15 12:44:56.520813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:16.207 [2024-11-15 12:44:56.520833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:16.207 [2024-11-15 12:44:56.520847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:16.207 [2024-11-15 12:44:56.520859] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:16.207 [2024-11-15 12:44:56.520869] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:16.207 [2024-11-15 12:44:56.520876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.207 [2024-11-15 12:44:56.530542] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:16.207 [2024-11-15 12:44:56.530562] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:16.207 [2024-11-15 12:44:56.530570] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:16.207 [2024-11-15 12:44:56.530577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:16.207 [2024-11-15 12:44:56.530615] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:16.207 [2024-11-15 12:44:56.530757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.207 [2024-11-15 12:44:56.530786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81f550 with addr=10.0.0.2, port=4420 00:23:16.207 [2024-11-15 12:44:56.530802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81f550 is same with the state(6) to be set 00:23:16.207 [2024-11-15 12:44:56.530825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81f550 (9): Bad file descriptor 00:23:16.207 [2024-11-15 12:44:56.530846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:16.207 [2024-11-15 12:44:56.530860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:16.207 [2024-11-15 12:44:56.530874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:16.207 [2024-11-15 12:44:56.530886] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:16.207 [2024-11-15 12:44:56.530895] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:16.207 [2024-11-15 12:44:56.530903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:16.207 [2024-11-15 12:44:56.540649] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:16.207 [2024-11-15 12:44:56.540682] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:16.207 [2024-11-15 12:44:56.540691] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:16.207 [2024-11-15 12:44:56.540713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:16.207 [2024-11-15 12:44:56.540746] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:16.207 [2024-11-15 12:44:56.540885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.207 [2024-11-15 12:44:56.540913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81f550 with addr=10.0.0.2, port=4420 00:23:16.207 [2024-11-15 12:44:56.540929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81f550 is same with the state(6) to be set 00:23:16.207 [2024-11-15 12:44:56.540951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81f550 (9): Bad file descriptor 00:23:16.207 [2024-11-15 12:44:56.540978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:16.207 [2024-11-15 12:44:56.540992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:16.207 [2024-11-15 12:44:56.541005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:16.207 [2024-11-15 12:44:56.541018] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:16.207 [2024-11-15 12:44:56.541027] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:16.207 [2024-11-15 12:44:56.541034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:16.207 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:16.468 [2024-11-15 12:44:56.550781] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:16.468 [2024-11-15 12:44:56.550805] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:16.468 [2024-11-15 12:44:56.550816] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:16.468 [2024-11-15 12:44:56.550824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:16.468 [2024-11-15 12:44:56.550866] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:16.468 [2024-11-15 12:44:56.550963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.468 [2024-11-15 12:44:56.550992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81f550 with addr=10.0.0.2, port=4420 00:23:16.468 [2024-11-15 12:44:56.551008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81f550 is same with the state(6) to be set 00:23:16.468 [2024-11-15 12:44:56.551030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81f550 (9): Bad file descriptor 00:23:16.468 [2024-11-15 12:44:56.551051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:16.468 [2024-11-15 12:44:56.551065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:16.468 [2024-11-15 12:44:56.551084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:16.468 [2024-11-15 12:44:56.551096] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:16.468 [2024-11-15 12:44:56.551105] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:16.468 [2024-11-15 12:44:56.551113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:16.468 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.468 [2024-11-15 12:44:56.560899] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:16.468 [2024-11-15 12:44:56.560920] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:16.468 [2024-11-15 12:44:56.560930] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:16.468 [2024-11-15 12:44:56.560937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:16.468 [2024-11-15 12:44:56.560975] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:16.468 [2024-11-15 12:44:56.561141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.468 [2024-11-15 12:44:56.561168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81f550 with addr=10.0.0.2, port=4420 00:23:16.468 [2024-11-15 12:44:56.561184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81f550 is same with the state(6) to be set 00:23:16.468 [2024-11-15 12:44:56.561206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81f550 (9): Bad file descriptor 00:23:16.468 [2024-11-15 12:44:56.561226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:16.468 [2024-11-15 12:44:56.561240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:16.468 [2024-11-15 12:44:56.561254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:16.468 [2024-11-15 12:44:56.561266] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:16.468 [2024-11-15 12:44:56.561275] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:16.468 [2024-11-15 12:44:56.561283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:16.468 [2024-11-15 12:44:56.571024] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:16.468 [2024-11-15 12:44:56.571044] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:16.468 [2024-11-15 12:44:56.571052] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:16.468 [2024-11-15 12:44:56.571060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:16.468 [2024-11-15 12:44:56.571097] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:16.468 [2024-11-15 12:44:56.571305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.468 [2024-11-15 12:44:56.571333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81f550 with addr=10.0.0.2, port=4420 00:23:16.468 [2024-11-15 12:44:56.571349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81f550 is same with the state(6) to be set 00:23:16.468 [2024-11-15 12:44:56.571370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81f550 (9): Bad file descriptor 00:23:16.468 [2024-11-15 12:44:56.571409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:16.468 [2024-11-15 12:44:56.571427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:16.468 [2024-11-15 12:44:56.571440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:16.468 [2024-11-15 12:44:56.571452] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:16.468 [2024-11-15 12:44:56.571461] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:16.468 [2024-11-15 12:44:56.571468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
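The repeated "connect() failed, errno = 111" reconnect attempts above are the expected fallout of the target-side steps driven in this part of discovery.sh. As visible in the xtrace (target-side rpc_cmd calls use the default SPDK socket, no -s flag), the sequence is roughly:

    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Once the 4420 listener is removed, the host keeps retrying that path until the next discovery log page reports it not found, after which only the 4421 path remains and the test checks get_subsystem_paths nvme0 against $NVMF_SECOND_PORT.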
00:23:16.468 [2024-11-15 12:44:56.577418] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:16.468 [2024-11-15 12:44:56.577446] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:16.468 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:23:16.468 12:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 
-- # (( max-- )) 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.409 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.669 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:17.669 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:17.669 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:17.669 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:17.669 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:17.669 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:17.669 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:17.669 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:17.669 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:17.669 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:17.669 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:17.669 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:17.669 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.670 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.670 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.670 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:17.670 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:17.670 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:17.670 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:17.670 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:17.670 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.670 12:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.611 [2024-11-15 12:44:58.870369] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:18.611 [2024-11-15 12:44:58.870402] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:18.611 [2024-11-15 12:44:58.870425] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:18.871 [2024-11-15 12:44:58.999851] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:18.871 [2024-11-15 12:44:59.101633] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:18.871 [2024-11-15 12:44:59.102412] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x8357b0:1 started. 
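Note (sketch, not part of the captured output): with nvme0 re-attached on 10.0.0.2:4421, the next step re-issues the discovery request under the NOT wrapper to confirm it is rejected as a duplicate (command copied from the xtrace below):
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
        -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # expected: JSON-RPC error -17 ("File exists"), since a discovery service for
    # 10.0.0.2:8009 already exists; a later attempt with -b nvme_second against
    # port 8010 (-T 3000) instead fails with -110 ("Connection timed out") because
    # nothing is listening on 8010.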
00:23:18.871 [2024-11-15 12:44:59.104504] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:18.871 [2024-11-15 12:44:59.104549] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 request: 00:23:18.871 { 00:23:18.871 "name": "nvme", 00:23:18.871 "trtype": "tcp", 00:23:18.871 "traddr": "10.0.0.2", 00:23:18.871 "adrfam": "ipv4", 00:23:18.871 "trsvcid": "8009", 00:23:18.871 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:18.871 "wait_for_attach": true, 00:23:18.871 "method": "bdev_nvme_start_discovery", 00:23:18.871 "req_id": 1 00:23:18.871 } 00:23:18.871 Got JSON-RPC error response 00:23:18.871 response: 00:23:18.871 { 00:23:18.871 "code": -17, 00:23:18.871 "message": "File exists" 00:23:18.871 } 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.871 12:44:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.871 [2024-11-15 12:44:59.148435] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x8357b0 was disconnected and freed. delete nvme_qpair. 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.871 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.130 request: 00:23:19.130 { 00:23:19.130 "name": "nvme_second", 00:23:19.130 "trtype": "tcp", 00:23:19.130 "traddr": "10.0.0.2", 00:23:19.130 "adrfam": "ipv4", 00:23:19.130 "trsvcid": "8009", 00:23:19.130 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:19.130 "wait_for_attach": true, 00:23:19.130 "method": 
"bdev_nvme_start_discovery", 00:23:19.130 "req_id": 1 00:23:19.130 } 00:23:19.130 Got JSON-RPC error response 00:23:19.130 response: 00:23:19.130 { 00:23:19.130 "code": -17, 00:23:19.130 "message": "File exists" 00:23:19.130 } 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:19.130 12:44:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.130 12:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.066 [2024-11-15 12:45:00.311911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.066 [2024-11-15 12:45:00.311966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8509a0 with addr=10.0.0.2, port=8010 00:23:20.066 [2024-11-15 12:45:00.311997] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:20.066 [2024-11-15 12:45:00.312012] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:20.066 [2024-11-15 12:45:00.312025] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:21.004 [2024-11-15 12:45:01.314437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.004 [2024-11-15 12:45:01.314492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85aff0 with addr=10.0.0.2, port=8010 00:23:21.004 [2024-11-15 12:45:01.314522] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:21.004 [2024-11-15 12:45:01.314536] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:21.004 [2024-11-15 12:45:01.314550] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:22.391 [2024-11-15 12:45:02.316604] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:22.391 request: 00:23:22.391 { 00:23:22.391 "name": "nvme_second", 00:23:22.391 "trtype": "tcp", 00:23:22.391 "traddr": "10.0.0.2", 00:23:22.391 "adrfam": "ipv4", 00:23:22.391 "trsvcid": "8010", 00:23:22.391 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:22.391 "wait_for_attach": false, 00:23:22.391 "attach_timeout_ms": 3000, 00:23:22.391 "method": "bdev_nvme_start_discovery", 00:23:22.391 "req_id": 1 00:23:22.391 } 00:23:22.391 Got JSON-RPC error response 00:23:22.391 response: 00:23:22.391 { 00:23:22.391 "code": -110, 00:23:22.391 "message": "Connection timed out" 00:23:22.391 } 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:22.391 12:45:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1098172 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:22.391 rmmod nvme_tcp 00:23:22.391 rmmod nvme_fabrics 00:23:22.391 rmmod nvme_keyring 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1098038 ']' 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1098038 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1098038 ']' 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1098038 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1098038 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1098038' 00:23:22.391 killing process with pid 1098038 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1098038 
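Note (condensed recap, not part of the captured output): the nvmftestfini teardown traced above amounts to the following, with the pids from this run; the wait on the target pid follows on the next line of the trace:
    kill 1098172                      # app started earlier by discovery.sh (the /tmp/host.sock side in this run)
    sync
    modprobe -v -r nvme-tcp           # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 1098038                      # nvmf target (reactor_1) started for this test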
00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1098038 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.391 12:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.931 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:24.931 00:23:24.931 real 0m14.565s 00:23:24.931 user 0m21.516s 00:23:24.932 sys 0m3.077s 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.932 ************************************ 00:23:24.932 END TEST nvmf_host_discovery 00:23:24.932 ************************************ 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.932 ************************************ 00:23:24.932 START TEST nvmf_host_multipath_status 00:23:24.932 ************************************ 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:24.932 * Looking for test storage... 
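Note (sketch, not part of the captured output): the iptr helper traced above restores the firewall to its pre-test state by dropping only the rules the harness tagged with an SPDK_NVMF comment:
    iptables-save | grep -v SPDK_NVMF | iptables-restore
The matching insert side, carrying that comment tag, reappears below when the next test's nvmftestinit opens port 4420 on cvl_0_1; after the restore, remove_spdk_ns tears down the cvl_0_0_ns_spdk namespace and the leftover address is flushed from cvl_0_1.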
00:23:24.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:24.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.932 --rc genhtml_branch_coverage=1 00:23:24.932 --rc genhtml_function_coverage=1 00:23:24.932 --rc genhtml_legend=1 00:23:24.932 --rc geninfo_all_blocks=1 00:23:24.932 --rc geninfo_unexecuted_blocks=1 00:23:24.932 00:23:24.932 ' 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:24.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.932 --rc genhtml_branch_coverage=1 00:23:24.932 --rc genhtml_function_coverage=1 00:23:24.932 --rc genhtml_legend=1 00:23:24.932 --rc geninfo_all_blocks=1 00:23:24.932 --rc geninfo_unexecuted_blocks=1 00:23:24.932 00:23:24.932 ' 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:24.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.932 --rc genhtml_branch_coverage=1 00:23:24.932 --rc genhtml_function_coverage=1 00:23:24.932 --rc genhtml_legend=1 00:23:24.932 --rc geninfo_all_blocks=1 00:23:24.932 --rc geninfo_unexecuted_blocks=1 00:23:24.932 00:23:24.932 ' 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:24.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.932 --rc genhtml_branch_coverage=1 00:23:24.932 --rc genhtml_function_coverage=1 00:23:24.932 --rc genhtml_legend=1 00:23:24.932 --rc geninfo_all_blocks=1 00:23:24.932 --rc geninfo_unexecuted_blocks=1 00:23:24.932 00:23:24.932 ' 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
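Note (recap only, values as captured in the xtrace that follows): sourcing test/nvmf/common.sh here establishes the defaults the rest of this test relies on:
    NVMF_PORT=4420  NVMF_SECOND_PORT=4421  NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100  NVMF_TCP_IP_ADDRESS=127.0.0.1
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:5b23e107-... in this run
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn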
00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.932 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:24.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:24.933 12:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:26.839 12:45:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.839 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:26.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:26.840 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:26.840 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:23:26.840 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.840 12:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.840 12:45:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:26.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:23:26.840 00:23:26.840 --- 10.0.0.2 ping statistics --- 00:23:26.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.840 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:23:26.840 00:23:26.840 --- 10.0.0.1 ping statistics --- 00:23:26.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.840 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:26.840 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:26.841 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:26.841 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:26.841 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:26.841 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:26.841 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1101464 00:23:26.841 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:26.841 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1101464 00:23:26.841 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1101464 ']' 00:23:26.841 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.841 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.841 12:45:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.841 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.841 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:27.099 [2024-11-15 12:45:07.186298] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:23:27.099 [2024-11-15 12:45:07.186389] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.099 [2024-11-15 12:45:07.256726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:27.099 [2024-11-15 12:45:07.313610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.099 [2024-11-15 12:45:07.313658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.099 [2024-11-15 12:45:07.313685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.099 [2024-11-15 12:45:07.313698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.099 [2024-11-15 12:45:07.313725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.099 [2024-11-15 12:45:07.317740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.099 [2024-11-15 12:45:07.317746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.099 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.099 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:27.099 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:27.099 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.099 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:27.358 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.358 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1101464 00:23:27.358 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:27.616 [2024-11-15 12:45:07.722249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.616 12:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:27.875 Malloc0 00:23:27.875 12:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:23:28.133 12:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:28.391 12:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.649 [2024-11-15 12:45:08.888574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.649 12:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:28.907 [2024-11-15 12:45:09.165305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:28.907 12:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1102248 00:23:28.907 12:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:28.907 12:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.907 12:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1102248 /var/tmp/bdevperf.sock 00:23:28.907 12:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1102248 ']' 00:23:28.907 12:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.907 12:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.907 12:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
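For orientation, the target-side configuration the trace above has just completed can be condensed into the following bash sketch; every RPC name, path, NQN, serial number and port is taken verbatim from this log, and only the $rpc shorthand is introduced here for brevity (a sketch, not the test script itself):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # nvmf_tgt itself was started inside the test namespace earlier in the trace:
    #   ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
    $rpc nvmf_create_transport -t tcp -o -u 8192                                         # TCP transport, flags as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0                                             # backing bdev for the namespace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421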
00:23:28.907 12:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.907 12:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:29.165 12:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.165 12:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:29.165 12:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:29.423 12:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:30.083 Nvme0n1 00:23:30.083 12:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:30.395 Nvme0n1 00:23:30.395 12:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:30.395 12:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:32.925 12:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:32.925 12:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:32.925 12:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:32.925 12:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:34.297 12:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:34.297 12:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:34.297 12:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.297 12:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:34.297 12:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.297 12:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:34.297 12:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.297 12:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:34.554 12:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:34.554 12:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:34.554 12:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.554 12:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:34.812 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.812 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:34.812 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.812 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:35.070 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.070 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:35.070 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.070 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:35.329 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.329 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:35.329 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.329 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:35.586 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.586 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:35.586 12:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
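Every check_status probe traced before and after this point goes through the same jq query against bdevperf's RPC socket; a hedged bash sketch of that probe follows. The RPC name, socket path and jq filter are copied from the trace, while the wrapper function body itself is an illustrative assumption, not the test script verbatim:

    # Probe one attribute (current / connected / accessible) of the io_path that uses a given port
    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
                     bdev_nvme_get_io_paths \
                 | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }
    # Example mirroring the check above: 4420 is the current path, 4421 is not
    port_status 4420 current true && port_status 4421 current false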
00:23:35.844 12:45:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:36.411 12:45:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:37.345 12:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:37.346 12:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:37.346 12:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.346 12:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:37.603 12:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:37.603 12:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:37.603 12:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.603 12:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:37.861 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.861 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:37.861 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.861 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:38.119 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.119 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:38.119 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.119 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:38.377 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.377 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:38.377 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
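On the initiator side (traced above at multipath_status.sh@52 through @76), the same subsystem is attached twice through the two listeners so that bdevperf sees a single Nvme0n1 with two I/O paths; a condensed bash sketch, with all flags as recorded in the trace and only the $rpc_bp shorthand added here:

    rpc_bp="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $rpc_bp bdev_nvme_set_options -r -1
    $rpc_bp bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10        # first path -> Nvme0n1
    $rpc_bp bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10        # second path, same Nvme0n1
    # Verify workload that the status checks run against (backgrounded in the test)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 120 -s /var/tmp/bdevperf.sock perform_tests &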
00:23:38.377 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:38.635 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.635 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:38.635 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.635 12:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:38.893 12:45:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.893 12:45:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:38.894 12:45:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:39.150 12:45:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:39.408 12:45:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:40.342 12:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:40.342 12:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:40.342 12:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.342 12:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:40.600 12:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.600 12:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:40.600 12:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.600 12:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:41.166 12:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:41.166 12:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:41.166 12:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.166 12:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:41.166 12:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.166 12:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:41.166 12:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.166 12:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:41.424 12:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.424 12:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:41.424 12:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.424 12:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:41.682 12:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.682 12:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:41.682 12:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.682 12:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:42.248 12:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.248 12:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:42.248 12:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:42.248 12:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:42.506 12:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:43.880 12:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:43.880 12:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:43.880 12:45:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.880 12:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:43.880 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.880 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:43.880 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.880 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:44.138 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:44.138 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:44.138 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.138 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:44.396 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.396 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:44.397 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.397 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:44.654 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.654 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:44.654 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.654 12:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:44.912 12:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.912 12:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:44.912 12:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.912 12:45:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:45.170 12:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.171 12:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:45.171 12:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:45.737 12:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:45.737 12:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:47.110 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:47.110 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:47.110 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.110 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:47.110 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.110 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:47.110 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.110 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:47.368 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.368 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:47.368 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.368 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:47.626 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.626 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:47.626 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.626 12:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:47.884 12:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.884 12:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:47.884 12:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.884 12:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:48.142 12:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:48.142 12:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:48.142 12:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.142 12:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:48.399 12:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:48.399 12:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:48.399 12:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:48.657 12:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:48.915 12:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:50.289 12:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:50.289 12:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:50.289 12:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.289 12:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:50.289 12:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:50.289 12:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:50.289 12:45:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.289 12:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:50.547 12:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.547 12:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:50.547 12:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.547 12:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:50.805 12:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.805 12:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:50.805 12:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.805 12:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:51.063 12:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.063 12:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:51.063 12:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.063 12:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:51.321 12:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:51.321 12:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:51.321 12:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.321 12:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:51.579 12:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.579 12:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:51.837 12:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:23:51.837 12:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:52.404 12:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:52.404 12:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:53.779 12:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:53.779 12:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:53.779 12:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.779 12:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:53.779 12:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.779 12:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:53.779 12:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.779 12:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:54.037 12:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.037 12:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:54.037 12:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.037 12:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:54.295 12:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.295 12:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:54.295 12:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.295 12:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:54.553 12:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.553 12:45:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:54.553 12:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.553 12:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:54.815 12:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.815 12:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:54.815 12:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.815 12:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:55.073 12:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.073 12:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:55.073 12:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:55.640 12:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:55.640 12:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:57.013 12:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:57.013 12:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:57.013 12:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.013 12:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:57.013 12:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:57.013 12:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:57.013 12:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.013 12:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:57.272 12:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.272 12:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:57.272 12:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.272 12:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:57.530 12:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.530 12:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:57.530 12:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.530 12:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:57.789 12:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.789 12:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:57.789 12:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.789 12:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:58.047 12:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.047 12:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:58.047 12:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.047 12:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:58.305 12:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.305 12:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:58.305 12:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:58.564 12:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:58.822 12:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
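Each ANA combination exercised in this pass (optimized/optimized, non_optimized/optimized, non_optimized/non_optimized, non_optimized/inaccessible, inaccessible/inaccessible, inaccessible/optimized, and then further combinations after the policy is switched with bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active) is applied with the same pair of RPC calls. A hedged sketch of that helper follows; set_ANA_state is the name used in the trace, but the function body here is a simplified assumption:

    # Set the ANA state advertised by each of the two listeners
    set_ANA_state() {
        local state_4420=$1 state_4421=$2
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
    }
    # The combination applied just before the check that follows this point
    set_ANA_state non_optimized non_optimized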
00:24:00.196 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:00.196 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:00.196 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.196 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:00.196 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.196 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:00.196 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.196 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:00.454 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.454 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:00.454 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.454 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:00.711 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.711 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:00.711 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.711 12:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:00.968 12:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.968 12:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:00.968 12:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.968 12:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:01.225 12:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.225 12:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:01.225 12:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.225 12:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:01.482 12:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.482 12:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:01.483 12:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:01.766 12:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:02.023 12:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:03.397 12:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:03.397 12:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:03.397 12:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.397 12:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.397 12:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.397 12:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:03.397 12:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.397 12:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.655 12:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.655 12:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.655 12:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.655 12:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:03.913 12:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:03.913 12:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:03.913 12:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.913 12:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.171 12:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.171 12:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:04.171 12:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.171 12:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.429 12:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.429 12:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:04.429 12:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.429 12:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:05.024 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:05.024 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1102248 00:24:05.024 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1102248 ']' 00:24:05.024 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1102248 00:24:05.024 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:05.024 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.024 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1102248 00:24:05.024 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:05.024 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:05.024 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1102248' 00:24:05.024 killing process with pid 1102248 00:24:05.024 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1102248 00:24:05.024 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1102248 00:24:05.024 { 00:24:05.024 "results": [ 00:24:05.024 { 00:24:05.024 "job": "Nvme0n1", 
00:24:05.024 "core_mask": "0x4", 00:24:05.024 "workload": "verify", 00:24:05.024 "status": "terminated", 00:24:05.024 "verify_range": { 00:24:05.024 "start": 0, 00:24:05.024 "length": 16384 00:24:05.024 }, 00:24:05.024 "queue_depth": 128, 00:24:05.024 "io_size": 4096, 00:24:05.024 "runtime": 34.23548, 00:24:05.024 "iops": 7881.881603529438, 00:24:05.024 "mibps": 30.788600013786866, 00:24:05.024 "io_failed": 0, 00:24:05.024 "io_timeout": 0, 00:24:05.024 "avg_latency_us": 16213.409769871858, 00:24:05.024 "min_latency_us": 849.5407407407407, 00:24:05.025 "max_latency_us": 4076242.1096296296 00:24:05.025 } 00:24:05.025 ], 00:24:05.025 "core_count": 1 00:24:05.025 } 00:24:05.025 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1102248 00:24:05.025 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:05.025 [2024-11-15 12:45:09.231070] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:24:05.025 [2024-11-15 12:45:09.231154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1102248 ] 00:24:05.025 [2024-11-15 12:45:09.298674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.025 [2024-11-15 12:45:09.356847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.025 Running I/O for 90 seconds... 00:24:05.025 8092.00 IOPS, 31.61 MiB/s [2024-11-15T11:45:45.369Z] 8203.50 IOPS, 32.04 MiB/s [2024-11-15T11:45:45.369Z] 8252.00 IOPS, 32.23 MiB/s [2024-11-15T11:45:45.369Z] 8327.00 IOPS, 32.53 MiB/s [2024-11-15T11:45:45.369Z] 8413.40 IOPS, 32.86 MiB/s [2024-11-15T11:45:45.369Z] 8414.67 IOPS, 32.87 MiB/s [2024-11-15T11:45:45.369Z] 8409.00 IOPS, 32.85 MiB/s [2024-11-15T11:45:45.369Z] 8417.50 IOPS, 32.88 MiB/s [2024-11-15T11:45:45.369Z] 8420.44 IOPS, 32.89 MiB/s [2024-11-15T11:45:45.369Z] 8400.90 IOPS, 32.82 MiB/s [2024-11-15T11:45:45.369Z] 8409.09 IOPS, 32.85 MiB/s [2024-11-15T11:45:45.369Z] 8381.50 IOPS, 32.74 MiB/s [2024-11-15T11:45:45.369Z] 8381.62 IOPS, 32.74 MiB/s [2024-11-15T11:45:45.369Z] 8355.07 IOPS, 32.64 MiB/s [2024-11-15T11:45:45.369Z] [2024-11-15 12:45:25.775181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.025 [2024-11-15 12:45:25.775229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775357] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775777] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.775966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.775988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.776004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.776025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.776041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.776078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.776094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.776116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.776136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.776158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.776174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 
m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.776195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.776211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.776232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.776249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.776270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.776286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.776307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.776322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.776343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.776359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.776380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.776396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.776417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.776432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.025 [2024-11-15 12:45:25.776453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.025 [2024-11-15 12:45:25.776469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.776491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.776506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.776527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.776543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.776564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.776579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.776605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.776621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.776642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.776657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.776678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.776708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.776741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.776759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.776781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.776797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.776819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.776835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.776857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.776873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.776895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.776911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.776933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.776949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.776970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.776986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
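The completions above and below all carry the same (03/02) status pair. As a reading aid, a minimal sketch of how that pair decodes, split the way SPDK prints it as status code type / status code; the named values come from the NVMe base specification and the helper name decode_status is illustrative only, not part of the test scripts:

decode_status() {
    # $1 = status code type (SCT), $2 = status code (SC), both as printed in "(SCT/SC)"
    case "$1/$2" in
        00/00) echo "SUCCESS" ;;
        03/01) echo "ASYMMETRIC ACCESS PERSISTENT LOSS" ;;
        03/02) echo "ASYMMETRIC ACCESS INACCESSIBLE" ;;   # the status filling this trace
        03/03) echo "ASYMMETRIC ACCESS TRANSITION" ;;
        *)     echo "SCT=0x$1 SC=0x$2 (not decoded in this sketch)" ;;
    esac
}
decode_status 03 02   # -> ASYMMETRIC ACCESS INACCESSIBLE

In other words, the writes traced here are being completed with a path-related error because they raced with the ANA state change on that listener, which is the condition this multipath status test deliberately provokes.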
00:24:05.026 [2024-11-15 12:45:25.777341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.026 [2024-11-15 12:45:25.777742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.026 [2024-11-15 12:45:25.777767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.777784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.778741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.778767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.778795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.778814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.778836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.778871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.778900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.778918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.778940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.778956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.778978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.778994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
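For context on what triggered those (03/02) completions: the set_ANA_state step logged at host/multipath_status.sh@59-60 above flips the ANA state of each listener through the target-side rpc.py. A minimal sketch of that step, with the rpc.py path, NQN, address and ports copied from the log (the variable names are illustrative, not the script's own):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {
    # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

set_ANA_state non_optimized inaccessible   # the combination applied at @133 above
sleep 1                                    # @134: give the initiator time to observe the change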
00:24:05.027 [2024-11-15 12:45:25.779539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.779964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.779980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.780002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.780019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.780041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.780081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.780114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.780147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.780550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.780573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.780600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.780618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.780641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.027 [2024-11-15 12:45:25.780674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.027 [2024-11-15 12:45:25.780700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.780725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.780751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.780768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.780791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.780806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.780828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.780844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.780866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.780892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.780917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.780934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.780957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.780974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.780996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
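On the verification side, the port_status checks at host/multipath_status.sh@64 earlier in this log query bdevperf over its RPC socket and filter the reply with jq. A minimal re-creation of that check, assuming the same rpc.py and /var/tmp/bdevperf.sock paths seen above; this is a sketch, not the SPDK script itself:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

port_status() {
    # $1 = trsvcid, $2 = field to read (current/connected/accessible), $3 = expected value
    local actual
    actual=$("$rpc" -s "$sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ "$actual" == "$3" ]]
}

# The check_status true false true true true false call above expands to:
port_status 4420 current true && port_status 4421 current false &&
port_status 4420 connected true && port_status 4421 connected true &&
port_status 4420 accessible true && port_status 4421 accessible false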
00:24:05.028 [2024-11-15 12:45:25.781234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.028 [2024-11-15 12:45:25.781928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.781966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.781988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.782020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.782051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.782069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.782095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.782112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.028 [2024-11-15 12:45:25.782134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.028 [2024-11-15 12:45:25.782149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.029 [2024-11-15 12:45:25.782171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-11-15 12:45:25.782186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.029 [2024-11-15 12:45:25.782207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-11-15 12:45:25.782223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.029 [2024-11-15 12:45:25.782245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-11-15 12:45:25.782261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.029 [2024-11-15 12:45:25.782281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-11-15 12:45:25.782297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.029 [2024-11-15 12:45:25.782326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-11-15 12:45:25.782344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.029 [2024-11-15 12:45:25.782366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-11-15 12:45:25.782382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.029 [2024-11-15 12:45:25.782403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.029 [2024-11-15 12:45:25.782436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
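The killprocess call at host/multipath_status.sh@137 is what stopped bdevperf and produced the terminated-job summary quoted before this dump. A simplified sketch of the flow that the common/autotest_common.sh@954-978 trace shows; the real helper does more (FreeBSD handling, sudo escalation), so treat the line references in the comments as the only firm part:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # @954: refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 1     # @958: only act on a live process
    local name
    name=$(ps --no-headers -o comm= "$pid")    # @959-960: on Linux, read the command name (reactor_2 here)
    [ "$name" = sudo ] && return 1             # @964: never signal a bare sudo wrapper from this path
    echo "killing process with pid $pid"       # @972
    kill "$pid"                                # @973
    wait "$pid" 2>/dev/null || true            # @978: reap it so bdevperf can flush its summary
}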
00:24:05.029 [2024-11-15 12:45:25.782473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:05.029 [2024-11-15 12:45:25.782499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
[... several hundred similar notice pairs elided: WRITE commands on sqid:1 (lba 80768-81776, len:8, plus one READ at lba:80760), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), between 2024-11-15 12:45:25.782 and 12:45:25.793 ...]
00:24:05.035 [2024-11-15 12:45:25.793361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-11-15
12:45:25.793377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.035 [2024-11-15 12:45:25.793399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-11-15 12:45:25.793414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.035 [2024-11-15 12:45:25.793436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-11-15 12:45:25.793452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.035 [2024-11-15 12:45:25.794251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-11-15 12:45:25.794275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.035 [2024-11-15 12:45:25.794303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-11-15 12:45:25.794321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.035 [2024-11-15 12:45:25.794345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-11-15 12:45:25.794371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.035 [2024-11-15 12:45:25.794405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-11-15 12:45:25.794424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.035 [2024-11-15 12:45:25.794447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-11-15 12:45:25.794464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.035 [2024-11-15 12:45:25.794486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-11-15 12:45:25.794503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.035 [2024-11-15 12:45:25.794525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.035 [2024-11-15 12:45:25.794541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.794563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81568 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.794580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.794602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.794619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.794642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.794658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.794680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.794696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.794726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.794744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.794767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.794784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.794806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.794822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.794844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.794865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.794888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.794905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.794926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.794943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.794966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:122 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.794983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.036 [2024-11-15 12:45:25.795618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
00:24:05.036 [2024-11-15 12:45:25.795735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.036 [2024-11-15 12:45:25.795966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.036 [2024-11-15 12:45:25.795983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.796794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.796815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.797607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.797631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.797681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.797707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.797741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.037 [2024-11-15 12:45:25.797770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.797796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.797813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.797835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.797851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.797873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.797889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.797911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.797928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.797949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.797965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.797988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.798005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.798029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.798062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.037 [2024-11-15 12:45:25.798084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.037 [2024-11-15 12:45:25.798099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.798950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.798966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:24:05.038 [2024-11-15 12:45:25.798988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.038 [2024-11-15 12:45:25.799614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.038 [2024-11-15 12:45:25.799636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.799652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.799674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.799696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.799728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.799746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.799769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.799785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.799807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.799822] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.799844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.799860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.799882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.799898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.799920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.799936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.799957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.799973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.800010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.800026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.800048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.800078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.800099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.800114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.800135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.800149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.800169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.800184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.800209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 
12:45:25.800224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.800245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.800260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81568 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.039 [2024-11-15 12:45:25.801687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.039 [2024-11-15 12:45:25.801733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.801752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.801775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.801791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.801813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:21 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.801828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.801850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.801866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.801888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.801904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.801926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.801942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.801964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.801984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802220] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.040 [2024-11-15 12:45:25.802491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 
dnr:0 00:24:05.040 [2024-11-15 12:45:25.802583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.040 [2024-11-15 12:45:25.802875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.040 [2024-11-15 12:45:25.802897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.802913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.802941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.802958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.802984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.803603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.803619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.804443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.804466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.804493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.804510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.804532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.804548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.804596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.041 [2024-11-15 12:45:25.804615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.804638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.804655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.804677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.804693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.804715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.804743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.804766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.804787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.804811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.804828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.804850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.804867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.804888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.804905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.804926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.804943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.804965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.804981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.805018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.805034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.805056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.041 [2024-11-15 12:45:25.805087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.041 [2024-11-15 12:45:25.805108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:24:05.042 [2024-11-15 12:45:25.805855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.805968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.805984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.806006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.806037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.806070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.806086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.806108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.806124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.806145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.806161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.806182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.806197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.806219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.806234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.806255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.806270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.806295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.806324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.806366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.806383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.806405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.806421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.806459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.806474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.806511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.806527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.806550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.806567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.042 [2024-11-15 12:45:25.806590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.042 [2024-11-15 12:45:25.806606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.806628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.806644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.806681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.806697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.806741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.806759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.806783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.806799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.806821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.806838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.806859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.806880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.806903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.806920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.806942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.806958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.806980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.806996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.807032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.807048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.807070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.807085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.807106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.043 [2024-11-15 12:45:25.807122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.807865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.807899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.807927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.807946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.807968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.807985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.043 [2024-11-15 12:45:25.808766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.043 [2024-11-15 12:45:25.808783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.808805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.808821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.808843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.808859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.808882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.808898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.808920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.808937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.808959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.808974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.044 
[2024-11-15 12:45:25.809101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.044 [2024-11-15 12:45:25.809371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809914] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.809974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.809991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.810028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.810044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.810081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.810096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.810117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.810131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.810152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.810171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.810192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.810207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.810228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.044 [2024-11-15 12:45:25.810243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.044 [2024-11-15 12:45:25.810264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.810279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.810299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 
12:45:25.810314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.810334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.810349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.810370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.810385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.810405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.810420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.810440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.810456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.810477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.810492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81016 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:49 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.811975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.811992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.812032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.812048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.812070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.812100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.812122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.812137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.045 [2024-11-15 12:45:25.812157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.045 [2024-11-15 12:45:25.812172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:24:05.046 [2024-11-15 12:45:25.812802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.812966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.812983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.046 [2024-11-15 12:45:25.813673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.046 [2024-11-15 12:45:25.813689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.813735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.813752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.813790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.813807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.813828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.813845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.813867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.813883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.813905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.813921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.813943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.813959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.813981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.813997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.814019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.814035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.816480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.047 [2024-11-15 12:45:25.816503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.816536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.816554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.816577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.816593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.816614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.816656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.816684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.816701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.816731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.816750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.816772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.816789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.816810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.816827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.816849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.816865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.816887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.816903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.816925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.816941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.816963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.816979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.817016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.817032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.817053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.817089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.817111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.817126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.817146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.817161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.817181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.817197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.817218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.817233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.817254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.817268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.817289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.817304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.817324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.817339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.817359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.817374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.817394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.817409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.047 [2024-11-15 12:45:25.817430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.047 [2024-11-15 12:45:25.817445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.817465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.817480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.817500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.817515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.817540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.817555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.817576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.817592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.817612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.817628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.817649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.817664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:24:05.048 [2024-11-15 12:45:25.817685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.817715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.817747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.817762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.817784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.817799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.817820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.817836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.817856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.817871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.817893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.817908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.817929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.817944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.817965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.048 [2024-11-15 12:45:25.817981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.048 [2024-11-15 12:45:25.818899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.048 [2024-11-15 12:45:25.818915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.818937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.049 [2024-11-15 12:45:25.818953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.818975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.818991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.819014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.819030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.819067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.819086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.819110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.819125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.049 8318.87 IOPS, 32.50 MiB/s [2024-11-15T11:45:45.393Z] [2024-11-15 12:45:25.819904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.819938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.819966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.819984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:24:05.049 [2024-11-15 12:45:25.820571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.820972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.820989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.821011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.821027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.821049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.821081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.821103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.821118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.821140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.821155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.821176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.821192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.821214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.821237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.049 [2024-11-15 12:45:25.821261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.049 [2024-11-15 12:45:25.821277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.050 [2024-11-15 12:45:25.821825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.821969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.821985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.822008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.822024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.822046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.822062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.822084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.822100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.822122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.822153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.822175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.822190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.822211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.822241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.822263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.822278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.822299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.822314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.822334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.822349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.822369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.822383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.822404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.822423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.050 [2024-11-15 12:45:25.822444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.050 [2024-11-15 12:45:25.822459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.822480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.822495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.822516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.822531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:24:05.051 [2024-11-15 12:45:25.823842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.823971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.823993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.051 [2024-11-15 12:45:25.824589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.051 [2024-11-15 12:45:25.824610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.824624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.824645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.824660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.824681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.824699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.824743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.824761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.824784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.824800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.824821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.824836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.824857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.824873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.824894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.052 [2024-11-15 12:45:25.824910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.824931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.824946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.824967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.052 [2024-11-15 12:45:25.824983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.052 [2024-11-15 12:45:25.825934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.052 [2024-11-15 12:45:25.825956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.825971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:24:05.053 [2024-11-15 12:45:25.826563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.826968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.826993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.053 [2024-11-15 12:45:25.827813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.053 [2024-11-15 12:45:25.827854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.053 [2024-11-15 12:45:25.827879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.827895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.827920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.827935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.827960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.827976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.828976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.828992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:25.829161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:25.829181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.054 7798.94 IOPS, 30.46 MiB/s [2024-11-15T11:45:45.398Z] 7340.18 IOPS, 28.67 MiB/s [2024-11-15T11:45:45.398Z] 6932.39 IOPS, 27.08 MiB/s [2024-11-15T11:45:45.398Z] 6567.53 IOPS, 25.65 MiB/s [2024-11-15T11:45:45.398Z] 6654.65 IOPS, 25.99 MiB/s [2024-11-15T11:45:45.398Z] 6738.81 IOPS, 26.32 MiB/s [2024-11-15T11:45:45.398Z] 6850.00 IOPS, 26.76 MiB/s 
[2024-11-15T11:45:45.398Z] 7030.48 IOPS, 27.46 MiB/s [2024-11-15T11:45:45.398Z] 7194.54 IOPS, 28.10 MiB/s [2024-11-15T11:45:45.398Z] 7333.76 IOPS, 28.65 MiB/s [2024-11-15T11:45:45.398Z] 7378.08 IOPS, 28.82 MiB/s [2024-11-15T11:45:45.398Z] 7424.85 IOPS, 29.00 MiB/s [2024-11-15T11:45:45.398Z] 7464.43 IOPS, 29.16 MiB/s [2024-11-15T11:45:45.398Z] 7561.72 IOPS, 29.54 MiB/s [2024-11-15T11:45:45.398Z] 7680.07 IOPS, 30.00 MiB/s [2024-11-15T11:45:45.398Z] 7801.87 IOPS, 30.48 MiB/s [2024-11-15T11:45:45.398Z] [2024-11-15 12:45:42.329328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:42.329387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.054 [2024-11-15 12:45:42.329459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.054 [2024-11-15 12:45:42.329480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.329514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.329532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.329553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.329569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.329590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.329605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.332498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.332526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.332581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.055 [2024-11-15 12:45:42.332600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.332623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.055 [2024-11-15 12:45:42.332642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.332663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:05.055 [2024-11-15 12:45:42.332679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.332715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.055 [2024-11-15 12:45:42.332744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.332769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.055 [2024-11-15 12:45:42.332788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.332810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.055 [2024-11-15 12:45:42.332827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.332849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.055 [2024-11-15 12:45:42.332865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.332888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.332905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.332933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.332951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.332974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.332990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.333054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.333106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9576 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.333144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.333181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.055 [2024-11-15 12:45:42.333218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.055 [2024-11-15 12:45:42.333256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.055 [2024-11-15 12:45:42.333293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.333330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.333367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.333404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.333448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.333497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:11 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.333535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.055 [2024-11-15 12:45:42.333572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.055 [2024-11-15 12:45:42.333593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.333609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.333630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.056 [2024-11-15 12:45:42.333646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.333667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.333683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.333727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.333746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.333770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.333786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.333808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.333824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.333847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.333864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.333886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.333902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.333925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.333946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.336167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.336195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.336223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.336257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.336281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.336297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.336319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.336335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.336357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.336373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.336395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.336410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.336432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.336447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.056 [2024-11-15 12:45:42.336470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.056 [2024-11-15 12:45:42.336486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.056 7852.72 IOPS, 30.67 MiB/s [2024-11-15T11:45:45.400Z] 7861.67 IOPS, 30.71 MiB/s [2024-11-15T11:45:45.400Z] 7878.47 IOPS, 30.78 MiB/s [2024-11-15T11:45:45.400Z] Received shutdown signal, test time was about 34.236272 seconds 00:24:05.056 00:24:05.056 Latency(us) 00:24:05.056 [2024-11-15T11:45:45.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:24:05.056 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:05.056 Verification LBA range: start 0x0 length 0x4000 00:24:05.056 Nvme0n1 : 34.24 7881.88 30.79 0.00 0.00 16213.41 849.54 4076242.11 00:24:05.056 [2024-11-15T11:45:45.400Z] =================================================================================================================== 00:24:05.056 [2024-11-15T11:45:45.400Z] Total : 7881.88 30.79 0.00 0.00 16213.41 849.54 4076242.11 00:24:05.056 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.315 rmmod nvme_tcp 00:24:05.315 rmmod nvme_fabrics 00:24:05.315 rmmod nvme_keyring 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1101464 ']' 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1101464 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1101464 ']' 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1101464 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.315 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1101464 00:24:05.573 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:05.573 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:05.573 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1101464' 00:24:05.573 killing process with pid 1101464 00:24:05.573 
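As a sanity check, the MiB/s column in the summary above follows directly from the IOPS column and the 4096-byte I/O size shown in the Job line; a one-liner reproducing the Total row's 30.79 MiB/s:

  awk 'BEGIN { iops = 7881.88; io = 4096; printf "%.2f MiB/s\n", iops * io / (1024 * 1024) }'   # prints 30.79 MiB/s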
12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1101464 00:24:05.573 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1101464 00:24:05.834 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:05.834 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:05.834 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:05.834 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:05.834 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:05.834 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:05.834 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:05.834 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:05.834 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:05.835 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.835 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.835 12:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.742 12:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:07.742 00:24:07.742 real 0m43.191s 00:24:07.742 user 2m9.900s 00:24:07.742 sys 0m11.549s 00:24:07.742 12:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:07.742 12:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:07.742 ************************************ 00:24:07.742 END TEST nvmf_host_multipath_status 00:24:07.742 ************************************ 00:24:07.742 12:45:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:07.742 12:45:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:07.742 12:45:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:07.742 12:45:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.742 ************************************ 00:24:07.742 START TEST nvmf_discovery_remove_ifc 00:24:07.742 ************************************ 00:24:07.742 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:07.742 * Looking for test storage... 
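Condensed, the nvmf_host_multipath_status teardown traced just above amounts to the following (SPDK_ROOT stands in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk; the network namespace itself is removed by the _remove_spdk_ns helper, whose underlying commands are not shown in this trace):

  $SPDK_ROOT/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
  rm -f $SPDK_ROOT/test/nvmf/host/try.txt                                      # scratch file used by the test
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics                       # unloads nvme_tcp, nvme_fabrics, nvme_keyring (the rmmod lines above)
  kill 1101464 && wait 1101464                                                 # stop the nvmf_tgt reactor (pid from this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore                         # strip the SPDK_NVMF accept rules
  ip -4 addr flush cvl_0_1                                                     # clear the initiator-side address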
00:24:07.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:07.742 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:07.742 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:24:07.742 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:08.001 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:08.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.002 --rc genhtml_branch_coverage=1 00:24:08.002 --rc genhtml_function_coverage=1 00:24:08.002 --rc genhtml_legend=1 00:24:08.002 --rc geninfo_all_blocks=1 00:24:08.002 --rc geninfo_unexecuted_blocks=1 00:24:08.002 00:24:08.002 ' 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:08.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.002 --rc genhtml_branch_coverage=1 00:24:08.002 --rc genhtml_function_coverage=1 00:24:08.002 --rc genhtml_legend=1 00:24:08.002 --rc geninfo_all_blocks=1 00:24:08.002 --rc geninfo_unexecuted_blocks=1 00:24:08.002 00:24:08.002 ' 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:08.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.002 --rc genhtml_branch_coverage=1 00:24:08.002 --rc genhtml_function_coverage=1 00:24:08.002 --rc genhtml_legend=1 00:24:08.002 --rc geninfo_all_blocks=1 00:24:08.002 --rc geninfo_unexecuted_blocks=1 00:24:08.002 00:24:08.002 ' 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:08.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.002 --rc genhtml_branch_coverage=1 00:24:08.002 --rc genhtml_function_coverage=1 00:24:08.002 --rc genhtml_legend=1 00:24:08.002 --rc geninfo_all_blocks=1 00:24:08.002 --rc geninfo_unexecuted_blocks=1 00:24:08.002 00:24:08.002 ' 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.002 
12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:08.002 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.003 12:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.908 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.908 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:09.908 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:09.908 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:09.908 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:09.908 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:09.908 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:09.908 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:09.909 12:45:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:09.909 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.909 12:45:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:09.909 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:09.909 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:09.909 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.909 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.168 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.168 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.168 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:10.168 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.168 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.168 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.168 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:10.168 
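Stripped of the xtrace prefixes, the bring-up that nvmf_tcp_init performs above is a small namespace-based topology: the target-side port (cvl_0_0) is moved into its own netns and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, and an iptables rule admits the NVMe/TCP port; the two pings that follow confirm reachability in both directions. Condensed (the iptables comment string is shortened here):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1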
12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:10.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:24:10.168 00:24:10.168 --- 10.0.0.2 ping statistics --- 00:24:10.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.168 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:24:10.169 00:24:10.169 --- 10.0.0.1 ping statistics --- 00:24:10.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.169 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1108728 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1108728 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1108728 ']' 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:10.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.169 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.169 [2024-11-15 12:45:50.392778] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:24:10.169 [2024-11-15 12:45:50.392857] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.169 [2024-11-15 12:45:50.465594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.427 [2024-11-15 12:45:50.521748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.427 [2024-11-15 12:45:50.521799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.427 [2024-11-15 12:45:50.521827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.427 [2024-11-15 12:45:50.521839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.427 [2024-11-15 12:45:50.521849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.427 [2024-11-15 12:45:50.522460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.427 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.427 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:10.427 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:10.427 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:10.427 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.427 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.427 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:10.427 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.427 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.427 [2024-11-15 12:45:50.672616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.427 [2024-11-15 12:45:50.680846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:10.427 null0 00:24:10.427 [2024-11-15 12:45:50.712772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.427 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.428 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1108753 00:24:10.428 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:24:10.428 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1108753 /tmp/host.sock 00:24:10.428 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1108753 ']' 00:24:10.428 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:10.428 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.428 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:10.428 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:10.428 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.428 12:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.686 [2024-11-15 12:45:50.777911] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:24:10.686 [2024-11-15 12:45:50.777989] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1108753 ] 00:24:10.686 [2024-11-15 12:45:50.842839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.686 [2024-11-15 12:45:50.899042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.946 12:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.946 12:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:10.946 12:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.946 12:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:10.946 12:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.946 12:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.946 12:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.946 12:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:10.946 12:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.946 12:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.946 12:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.946 12:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:10.946 12:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.946 12:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:12.320 [2024-11-15 12:45:52.230887] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:12.320 [2024-11-15 12:45:52.230921] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:12.320 [2024-11-15 12:45:52.230950] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:12.320 [2024-11-15 12:45:52.317272] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:12.320 [2024-11-15 12:45:52.419199] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:12.320 [2024-11-15 12:45:52.420180] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xffdbe0:1 started. 00:24:12.320 [2024-11-15 12:45:52.421845] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:12.320 [2024-11-15 12:45:52.421898] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:12.320 [2024-11-15 12:45:52.421932] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:12.320 [2024-11-15 12:45:52.421956] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:12.320 [2024-11-15 12:45:52.421994] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:12.320 [2024-11-15 12:45:52.428760] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xffdbe0 was disconnected and freed. delete nvme_qpair. 
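A minimal sketch of the bdev polling helpers exercised repeatedly in the trace above (get_bdev_list / wait_for_bdev from host/discovery_remove_ifc.sh), reconstructed from the traced pipeline; the scripts/rpc.py invocation stands in for the rpc_cmd wrapper seen in the log, and the loop body is assumed rather than copied from the script:

get_bdev_list() {
    # List the bdev names known to the host app over its /tmp/host.sock RPC
    # socket, normalized to a single sorted, space-separated line.
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the bdev list equals the expected value,
    # e.g. "nvme0n1" right after attach or "" once the path has been torn down.
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}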
00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:12.320 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:12.321 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:12.321 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.321 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:12.321 12:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:13.255 12:45:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:13.255 12:45:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.255 12:45:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:13.255 12:45:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.255 12:45:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:13.255 12:45:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:13.255 12:45:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:13.255 12:45:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.513 12:45:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:13.513 12:45:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:14.446 12:45:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:14.446 12:45:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.446 12:45:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:14.446 12:45:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.446 12:45:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:14.446 12:45:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.446 12:45:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:14.446 12:45:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.446 12:45:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:14.446 12:45:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:15.380 12:45:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:15.380 12:45:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.380 12:45:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:15.380 12:45:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.380 12:45:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:15.380 12:45:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.380 12:45:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:15.380 12:45:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.380 12:45:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:15.380 12:45:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:16.755 12:45:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:16.755 12:45:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.755 12:45:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:16.755 12:45:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.755 12:45:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:16.755 12:45:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.755 12:45:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:16.755 12:45:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.755 12:45:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:16.755 12:45:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:17.691 12:45:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:17.691 12:45:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.691 12:45:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:17.691 12:45:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.691 12:45:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:17.691 12:45:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.691 12:45:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:17.692 12:45:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.692 12:45:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:17.692 12:45:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:17.692 [2024-11-15 12:45:57.863271] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:17.692 [2024-11-15 12:45:57.863347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.692 [2024-11-15 12:45:57.863370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.692 [2024-11-15 12:45:57.863388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.692 [2024-11-15 12:45:57.863401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.692 [2024-11-15 12:45:57.863414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.692 [2024-11-15 12:45:57.863427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.692 [2024-11-15 12:45:57.863440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.692 [2024-11-15 12:45:57.863453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.692 [2024-11-15 12:45:57.863467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.692 [2024-11-15 12:45:57.863479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.692 [2024-11-15 12:45:57.863492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfda400 is same with the state(6) to be set 00:24:17.692 [2024-11-15 12:45:57.873289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfda400 (9): Bad file descriptor 00:24:17.692 [2024-11-15 12:45:57.883334] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:17.692 [2024-11-15 12:45:57.883356] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
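The timeout and reconnect messages above follow from the fault injected earlier in the trace; a condensed sketch of that step, with the commands taken from the log (namespace and interface names are specific to this CI host) and the relevant discovery options repeated from the attach command for reference:

# Remove the target's address and take its interface down inside the namespace;
# the host's TCP qpair then times out (errno 110) and bdev_nvme begins
# reconnect attempts.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

# Options passed to bdev_nvme_start_discovery earlier in the trace that bound
# the retry behaviour seen here:
#   --reconnect-delay-sec 1        retry the connection once per second
#   --ctrlr-loss-timeout-sec 2     give up on the controller after ~2 seconds
#   --fast-io-fail-timeout-sec 1   fail outstanding I/O after 1 second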
00:24:17.692 [2024-11-15 12:45:57.883366] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:17.692 [2024-11-15 12:45:57.883374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:17.692 [2024-11-15 12:45:57.883424] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:18.626 12:45:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:18.626 12:45:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.626 12:45:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:18.626 12:45:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.626 12:45:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:18.626 12:45:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:18.626 12:45:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:18.626 [2024-11-15 12:45:58.935761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:18.626 [2024-11-15 12:45:58.935828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfda400 with addr=10.0.0.2, port=4420 00:24:18.626 [2024-11-15 12:45:58.935851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfda400 is same with the state(6) to be set 00:24:18.626 [2024-11-15 12:45:58.935889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfda400 (9): Bad file descriptor 00:24:18.626 [2024-11-15 12:45:58.936313] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:24:18.626 [2024-11-15 12:45:58.936353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:18.626 [2024-11-15 12:45:58.936370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:18.626 [2024-11-15 12:45:58.936384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:18.626 [2024-11-15 12:45:58.936397] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:18.626 [2024-11-15 12:45:58.936408] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:18.626 [2024-11-15 12:45:58.936416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:18.626 [2024-11-15 12:45:58.936429] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:24:18.626 [2024-11-15 12:45:58.936438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:18.626 12:45:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.626 12:45:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:18.626 12:45:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:20.000 [2024-11-15 12:45:59.938934] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:20.000 [2024-11-15 12:45:59.938980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:20.000 [2024-11-15 12:45:59.939005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.000 [2024-11-15 12:45:59.939042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.000 [2024-11-15 12:45:59.939057] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:20.000 [2024-11-15 12:45:59.939070] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:20.000 [2024-11-15 12:45:59.939096] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:20.000 [2024-11-15 12:45:59.939104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:20.000 [2024-11-15 12:45:59.939150] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:20.000 [2024-11-15 12:45:59.939207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.000 [2024-11-15 12:45:59.939229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-11-15 12:45:59.939246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.000 [2024-11-15 12:45:59.939258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-11-15 12:45:59.939271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.000 [2024-11-15 12:45:59.939283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-11-15 12:45:59.939296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.000 [2024-11-15 12:45:59.939308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-11-15 12:45:59.939321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.000 [2024-11-15 12:45:59.939333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.000 [2024-11-15 12:45:59.939345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:20.000 [2024-11-15 12:45:59.939429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc9b40 (9): Bad file descriptor 00:24:20.000 [2024-11-15 12:45:59.940461] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:20.000 [2024-11-15 12:45:59.940481] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:20.000 12:45:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.000 12:45:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.000 12:45:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.000 12:45:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.000 12:45:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.000 12:45:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.000 12:45:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.000 12:45:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.000 12:45:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:20.001 12:45:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.001 12:46:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.001 12:46:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:20.001 12:46:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.001 12:46:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.001 12:46:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.001 12:46:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.001 12:46:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.001 12:46:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.001 12:46:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.001 12:46:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.001 12:46:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:20.001 12:46:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:20.934 12:46:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.934 12:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.934 12:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.934 12:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.934 12:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.934 12:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.934 12:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.934 12:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.934 12:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:20.934 12:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:21.867 [2024-11-15 12:46:01.995932] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:21.867 [2024-11-15 12:46:01.995961] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:21.867 [2024-11-15 12:46:01.995984] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:21.867 [2024-11-15 12:46:02.082263] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:21.867 12:46:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:21.867 12:46:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.867 12:46:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:21.868 12:46:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.868 12:46:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.868 12:46:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:21.868 12:46:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:21.868 12:46:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.868 12:46:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:21.868 12:46:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:22.135 [2024-11-15 12:46:02.297551] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:22.135 [2024-11-15 12:46:02.298326] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xfe4bd0:1 started. 
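For contrast with the removal step, a sketch of the recovery path traced just above (commands copied from the log): once the address is back and the link is up, the discovery poller re-attaches the subsystem under a new controller name, so the test waits for nvme1n1 rather than nvme0n1.

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# Poll until the re-attached namespace shows up as a bdev; wait_for_bdev is the
# sleep-1 loop sketched earlier.
wait_for_bdev nvme1n1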
00:24:22.135 [2024-11-15 12:46:02.299714] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:22.135 [2024-11-15 12:46:02.299763] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:22.135 [2024-11-15 12:46:02.299792] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:22.135 [2024-11-15 12:46:02.299814] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:22.135 [2024-11-15 12:46:02.299827] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:22.135 [2024-11-15 12:46:02.345392] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xfe4bd0 was disconnected and freed. delete nvme_qpair. 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1108753 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1108753 ']' 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1108753 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1108753 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1108753' 00:24:23.208 killing process with pid 1108753 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1108753 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1108753 00:24:23.208 12:46:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:23.208 rmmod nvme_tcp 00:24:23.208 rmmod nvme_fabrics 00:24:23.208 rmmod nvme_keyring 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1108728 ']' 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1108728 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1108728 ']' 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1108728 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.208 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1108728 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1108728' 00:24:23.477 killing process with pid 1108728 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1108728 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1108728 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.477 12:46:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.012 12:46:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:26.012 00:24:26.012 real 0m17.829s 00:24:26.012 user 0m25.958s 00:24:26.012 sys 0m3.038s 00:24:26.012 12:46:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.012 12:46:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.012 ************************************ 00:24:26.012 END TEST nvmf_discovery_remove_ifc 00:24:26.012 ************************************ 00:24:26.012 12:46:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:26.012 12:46:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:26.012 12:46:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.012 12:46:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.012 ************************************ 00:24:26.012 START TEST nvmf_identify_kernel_target 00:24:26.012 ************************************ 00:24:26.012 12:46:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:26.012 * Looking for test storage... 
00:24:26.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:26.012 12:46:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:26.012 12:46:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:24:26.012 12:46:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:26.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.012 --rc genhtml_branch_coverage=1 00:24:26.012 --rc genhtml_function_coverage=1 00:24:26.012 --rc genhtml_legend=1 00:24:26.012 --rc geninfo_all_blocks=1 00:24:26.012 --rc geninfo_unexecuted_blocks=1 00:24:26.012 00:24:26.012 ' 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:26.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.012 --rc genhtml_branch_coverage=1 00:24:26.012 --rc genhtml_function_coverage=1 00:24:26.012 --rc genhtml_legend=1 00:24:26.012 --rc geninfo_all_blocks=1 00:24:26.012 --rc geninfo_unexecuted_blocks=1 00:24:26.012 00:24:26.012 ' 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:26.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.012 --rc genhtml_branch_coverage=1 00:24:26.012 --rc genhtml_function_coverage=1 00:24:26.012 --rc genhtml_legend=1 00:24:26.012 --rc geninfo_all_blocks=1 00:24:26.012 --rc geninfo_unexecuted_blocks=1 00:24:26.012 00:24:26.012 ' 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:26.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.012 --rc genhtml_branch_coverage=1 00:24:26.012 --rc genhtml_function_coverage=1 00:24:26.012 --rc genhtml_legend=1 00:24:26.012 --rc geninfo_all_blocks=1 00:24:26.012 --rc geninfo_unexecuted_blocks=1 00:24:26.012 00:24:26.012 ' 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.012 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:26.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.013 12:46:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.541 12:46:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.541 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:28.542 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:28.542 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:28.542 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:28.542 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:28.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:24:28.542 00:24:28.542 --- 10.0.0.2 ping statistics --- 00:24:28.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.542 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:24:28.542 00:24:28.542 --- 10.0.0.1 ping statistics --- 00:24:28.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.542 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:28.542 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.543 12:46:08 
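The nvmf_tcp_init trace above boils down to: move one of the two E810 ports (cvl_0_0) into a private network namespace with 10.0.0.2/24, leave the other port (cvl_0_1) in the default namespace with 10.0.0.1/24, open TCP port 4420 in the INPUT chain, and confirm reachability in both directions. A minimal standalone rendering of that sequence, assuming the interface names and addresses from the trace and root privileges:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # one port gets its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # default-namespace side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment SPDK_NVMF              # tag the rule so teardown can find it
  ping -c 1 10.0.0.2                                   # default namespace -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> default namespace

The SPDK_NVMF comment on the iptables rule is what lets the later cleanup (iptables-save | grep -v SPDK_NVMF | iptables-restore, visible further down in the trace) strip exactly this rule and nothing else.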
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:28.543 12:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:29.553 Waiting for block devices as requested 00:24:29.553 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:29.553 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:29.810 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:29.810 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:29.810 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:30.069 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:30.069 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:30.069 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:30.069 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:30.328 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:30.328 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:30.328 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:30.328 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:30.586 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:30.586 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:30.586 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:30.586 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
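The "Waiting for block devices as requested" block above is setup.sh reset handing the NVMe SSD at 0000:88:00.0 back from vfio-pci to the kernel nvme driver (and the ioatdma channels back to ioatdma), so that the kernel NVMe-oF target has a real /dev/nvme0n1 to export. A minimal sketch of one common sysfs mechanism for moving a single device between drivers, using the BDF from the trace; the project's setup.sh wraps this along with hugepage and allowlist handling, and its exact steps may differ:

  BDF=0000:88:00.0
  echo "$BDF" > /sys/bus/pci/devices/$BDF/driver/unbind    # detach from the current driver (vfio-pci)
  echo nvme   > /sys/bus/pci/devices/$BDF/driver_override  # pin the driver we want next
  echo "$BDF" > /sys/bus/pci/drivers_probe                 # re-probe; nvme now claims the device
  echo ""     > /sys/bus/pci/devices/$BDF/driver_override  # clear the override afterwards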
00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:30.843 No valid GPT data, bailing 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:30.843 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:30.844 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:30.844 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:30.844 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:30.844 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:30.844 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:30.844 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:30.844 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:30.844 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:30.844 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:30.844 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:30.844 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:31.103 00:24:31.103 Discovery Log Number of Records 2, Generation counter 2 00:24:31.103 =====Discovery Log Entry 0====== 00:24:31.103 trtype: tcp 00:24:31.103 adrfam: ipv4 00:24:31.103 subtype: current discovery subsystem 00:24:31.103 treq: not specified, sq flow control disable supported 00:24:31.103 portid: 1 00:24:31.103 trsvcid: 4420 00:24:31.103 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:31.103 traddr: 10.0.0.1 00:24:31.103 eflags: none 00:24:31.103 sectype: none 00:24:31.103 =====Discovery Log Entry 1====== 00:24:31.103 trtype: tcp 00:24:31.103 adrfam: ipv4 00:24:31.103 subtype: nvme subsystem 00:24:31.103 treq: not specified, sq flow control disable 
supported 00:24:31.103 portid: 1 00:24:31.103 trsvcid: 4420 00:24:31.103 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:31.103 traddr: 10.0.0.1 00:24:31.103 eflags: none 00:24:31.103 sectype: none 00:24:31.103 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:31.103 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:31.103 ===================================================== 00:24:31.103 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:31.103 ===================================================== 00:24:31.103 Controller Capabilities/Features 00:24:31.103 ================================ 00:24:31.103 Vendor ID: 0000 00:24:31.103 Subsystem Vendor ID: 0000 00:24:31.103 Serial Number: 69deed1be0d966ce5a7c 00:24:31.103 Model Number: Linux 00:24:31.103 Firmware Version: 6.8.9-20 00:24:31.103 Recommended Arb Burst: 0 00:24:31.103 IEEE OUI Identifier: 00 00 00 00:24:31.103 Multi-path I/O 00:24:31.103 May have multiple subsystem ports: No 00:24:31.103 May have multiple controllers: No 00:24:31.103 Associated with SR-IOV VF: No 00:24:31.103 Max Data Transfer Size: Unlimited 00:24:31.103 Max Number of Namespaces: 0 00:24:31.103 Max Number of I/O Queues: 1024 00:24:31.103 NVMe Specification Version (VS): 1.3 00:24:31.103 NVMe Specification Version (Identify): 1.3 00:24:31.103 Maximum Queue Entries: 1024 00:24:31.103 Contiguous Queues Required: No 00:24:31.103 Arbitration Mechanisms Supported 00:24:31.103 Weighted Round Robin: Not Supported 00:24:31.103 Vendor Specific: Not Supported 00:24:31.103 Reset Timeout: 7500 ms 00:24:31.103 Doorbell Stride: 4 bytes 00:24:31.104 NVM Subsystem Reset: Not Supported 00:24:31.104 Command Sets Supported 00:24:31.104 NVM Command Set: Supported 00:24:31.104 Boot Partition: Not Supported 00:24:31.104 Memory Page Size Minimum: 4096 bytes 00:24:31.104 Memory Page Size Maximum: 4096 bytes 00:24:31.104 Persistent Memory Region: Not Supported 00:24:31.104 Optional Asynchronous Events Supported 00:24:31.104 Namespace Attribute Notices: Not Supported 00:24:31.104 Firmware Activation Notices: Not Supported 00:24:31.104 ANA Change Notices: Not Supported 00:24:31.104 PLE Aggregate Log Change Notices: Not Supported 00:24:31.104 LBA Status Info Alert Notices: Not Supported 00:24:31.104 EGE Aggregate Log Change Notices: Not Supported 00:24:31.104 Normal NVM Subsystem Shutdown event: Not Supported 00:24:31.104 Zone Descriptor Change Notices: Not Supported 00:24:31.104 Discovery Log Change Notices: Supported 00:24:31.104 Controller Attributes 00:24:31.104 128-bit Host Identifier: Not Supported 00:24:31.104 Non-Operational Permissive Mode: Not Supported 00:24:31.104 NVM Sets: Not Supported 00:24:31.104 Read Recovery Levels: Not Supported 00:24:31.104 Endurance Groups: Not Supported 00:24:31.104 Predictable Latency Mode: Not Supported 00:24:31.104 Traffic Based Keep ALive: Not Supported 00:24:31.104 Namespace Granularity: Not Supported 00:24:31.104 SQ Associations: Not Supported 00:24:31.104 UUID List: Not Supported 00:24:31.104 Multi-Domain Subsystem: Not Supported 00:24:31.104 Fixed Capacity Management: Not Supported 00:24:31.104 Variable Capacity Management: Not Supported 00:24:31.104 Delete Endurance Group: Not Supported 00:24:31.104 Delete NVM Set: Not Supported 00:24:31.104 Extended LBA Formats Supported: Not Supported 00:24:31.104 Flexible Data Placement 
Supported: Not Supported 00:24:31.104 00:24:31.104 Controller Memory Buffer Support 00:24:31.104 ================================ 00:24:31.104 Supported: No 00:24:31.104 00:24:31.104 Persistent Memory Region Support 00:24:31.104 ================================ 00:24:31.104 Supported: No 00:24:31.104 00:24:31.104 Admin Command Set Attributes 00:24:31.104 ============================ 00:24:31.104 Security Send/Receive: Not Supported 00:24:31.104 Format NVM: Not Supported 00:24:31.104 Firmware Activate/Download: Not Supported 00:24:31.104 Namespace Management: Not Supported 00:24:31.104 Device Self-Test: Not Supported 00:24:31.104 Directives: Not Supported 00:24:31.104 NVMe-MI: Not Supported 00:24:31.104 Virtualization Management: Not Supported 00:24:31.104 Doorbell Buffer Config: Not Supported 00:24:31.104 Get LBA Status Capability: Not Supported 00:24:31.104 Command & Feature Lockdown Capability: Not Supported 00:24:31.104 Abort Command Limit: 1 00:24:31.104 Async Event Request Limit: 1 00:24:31.104 Number of Firmware Slots: N/A 00:24:31.104 Firmware Slot 1 Read-Only: N/A 00:24:31.104 Firmware Activation Without Reset: N/A 00:24:31.104 Multiple Update Detection Support: N/A 00:24:31.104 Firmware Update Granularity: No Information Provided 00:24:31.104 Per-Namespace SMART Log: No 00:24:31.104 Asymmetric Namespace Access Log Page: Not Supported 00:24:31.104 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:31.104 Command Effects Log Page: Not Supported 00:24:31.104 Get Log Page Extended Data: Supported 00:24:31.104 Telemetry Log Pages: Not Supported 00:24:31.104 Persistent Event Log Pages: Not Supported 00:24:31.104 Supported Log Pages Log Page: May Support 00:24:31.104 Commands Supported & Effects Log Page: Not Supported 00:24:31.104 Feature Identifiers & Effects Log Page:May Support 00:24:31.104 NVMe-MI Commands & Effects Log Page: May Support 00:24:31.104 Data Area 4 for Telemetry Log: Not Supported 00:24:31.104 Error Log Page Entries Supported: 1 00:24:31.104 Keep Alive: Not Supported 00:24:31.104 00:24:31.104 NVM Command Set Attributes 00:24:31.104 ========================== 00:24:31.104 Submission Queue Entry Size 00:24:31.104 Max: 1 00:24:31.104 Min: 1 00:24:31.104 Completion Queue Entry Size 00:24:31.104 Max: 1 00:24:31.104 Min: 1 00:24:31.104 Number of Namespaces: 0 00:24:31.104 Compare Command: Not Supported 00:24:31.104 Write Uncorrectable Command: Not Supported 00:24:31.104 Dataset Management Command: Not Supported 00:24:31.104 Write Zeroes Command: Not Supported 00:24:31.104 Set Features Save Field: Not Supported 00:24:31.104 Reservations: Not Supported 00:24:31.104 Timestamp: Not Supported 00:24:31.104 Copy: Not Supported 00:24:31.104 Volatile Write Cache: Not Present 00:24:31.104 Atomic Write Unit (Normal): 1 00:24:31.104 Atomic Write Unit (PFail): 1 00:24:31.104 Atomic Compare & Write Unit: 1 00:24:31.104 Fused Compare & Write: Not Supported 00:24:31.104 Scatter-Gather List 00:24:31.104 SGL Command Set: Supported 00:24:31.104 SGL Keyed: Not Supported 00:24:31.104 SGL Bit Bucket Descriptor: Not Supported 00:24:31.104 SGL Metadata Pointer: Not Supported 00:24:31.104 Oversized SGL: Not Supported 00:24:31.104 SGL Metadata Address: Not Supported 00:24:31.104 SGL Offset: Supported 00:24:31.104 Transport SGL Data Block: Not Supported 00:24:31.104 Replay Protected Memory Block: Not Supported 00:24:31.104 00:24:31.104 Firmware Slot Information 00:24:31.104 ========================= 00:24:31.104 Active slot: 0 00:24:31.104 00:24:31.104 00:24:31.104 Error Log 00:24:31.104 
========= 00:24:31.104 00:24:31.104 Active Namespaces 00:24:31.104 ================= 00:24:31.104 Discovery Log Page 00:24:31.104 ================== 00:24:31.104 Generation Counter: 2 00:24:31.104 Number of Records: 2 00:24:31.104 Record Format: 0 00:24:31.104 00:24:31.104 Discovery Log Entry 0 00:24:31.104 ---------------------- 00:24:31.104 Transport Type: 3 (TCP) 00:24:31.104 Address Family: 1 (IPv4) 00:24:31.104 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:31.104 Entry Flags: 00:24:31.104 Duplicate Returned Information: 0 00:24:31.104 Explicit Persistent Connection Support for Discovery: 0 00:24:31.104 Transport Requirements: 00:24:31.104 Secure Channel: Not Specified 00:24:31.104 Port ID: 1 (0x0001) 00:24:31.104 Controller ID: 65535 (0xffff) 00:24:31.104 Admin Max SQ Size: 32 00:24:31.104 Transport Service Identifier: 4420 00:24:31.104 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:31.104 Transport Address: 10.0.0.1 00:24:31.104 Discovery Log Entry 1 00:24:31.104 ---------------------- 00:24:31.104 Transport Type: 3 (TCP) 00:24:31.104 Address Family: 1 (IPv4) 00:24:31.104 Subsystem Type: 2 (NVM Subsystem) 00:24:31.104 Entry Flags: 00:24:31.104 Duplicate Returned Information: 0 00:24:31.104 Explicit Persistent Connection Support for Discovery: 0 00:24:31.104 Transport Requirements: 00:24:31.104 Secure Channel: Not Specified 00:24:31.104 Port ID: 1 (0x0001) 00:24:31.104 Controller ID: 65535 (0xffff) 00:24:31.104 Admin Max SQ Size: 32 00:24:31.104 Transport Service Identifier: 4420 00:24:31.104 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:31.104 Transport Address: 10.0.0.1 00:24:31.104 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:31.104 get_feature(0x01) failed 00:24:31.104 get_feature(0x02) failed 00:24:31.104 get_feature(0x04) failed 00:24:31.104 ===================================================== 00:24:31.104 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:31.104 ===================================================== 00:24:31.104 Controller Capabilities/Features 00:24:31.104 ================================ 00:24:31.104 Vendor ID: 0000 00:24:31.104 Subsystem Vendor ID: 0000 00:24:31.104 Serial Number: 7848c67fb306bf26192d 00:24:31.104 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:31.104 Firmware Version: 6.8.9-20 00:24:31.104 Recommended Arb Burst: 6 00:24:31.104 IEEE OUI Identifier: 00 00 00 00:24:31.104 Multi-path I/O 00:24:31.104 May have multiple subsystem ports: Yes 00:24:31.104 May have multiple controllers: Yes 00:24:31.104 Associated with SR-IOV VF: No 00:24:31.104 Max Data Transfer Size: Unlimited 00:24:31.104 Max Number of Namespaces: 1024 00:24:31.104 Max Number of I/O Queues: 128 00:24:31.104 NVMe Specification Version (VS): 1.3 00:24:31.104 NVMe Specification Version (Identify): 1.3 00:24:31.104 Maximum Queue Entries: 1024 00:24:31.104 Contiguous Queues Required: No 00:24:31.104 Arbitration Mechanisms Supported 00:24:31.104 Weighted Round Robin: Not Supported 00:24:31.104 Vendor Specific: Not Supported 00:24:31.104 Reset Timeout: 7500 ms 00:24:31.104 Doorbell Stride: 4 bytes 00:24:31.104 NVM Subsystem Reset: Not Supported 00:24:31.104 Command Sets Supported 00:24:31.104 NVM Command Set: Supported 00:24:31.104 Boot Partition: Not Supported 00:24:31.104 
Memory Page Size Minimum: 4096 bytes 00:24:31.105 Memory Page Size Maximum: 4096 bytes 00:24:31.105 Persistent Memory Region: Not Supported 00:24:31.105 Optional Asynchronous Events Supported 00:24:31.105 Namespace Attribute Notices: Supported 00:24:31.105 Firmware Activation Notices: Not Supported 00:24:31.105 ANA Change Notices: Supported 00:24:31.105 PLE Aggregate Log Change Notices: Not Supported 00:24:31.105 LBA Status Info Alert Notices: Not Supported 00:24:31.105 EGE Aggregate Log Change Notices: Not Supported 00:24:31.105 Normal NVM Subsystem Shutdown event: Not Supported 00:24:31.105 Zone Descriptor Change Notices: Not Supported 00:24:31.105 Discovery Log Change Notices: Not Supported 00:24:31.105 Controller Attributes 00:24:31.105 128-bit Host Identifier: Supported 00:24:31.105 Non-Operational Permissive Mode: Not Supported 00:24:31.105 NVM Sets: Not Supported 00:24:31.105 Read Recovery Levels: Not Supported 00:24:31.105 Endurance Groups: Not Supported 00:24:31.105 Predictable Latency Mode: Not Supported 00:24:31.105 Traffic Based Keep ALive: Supported 00:24:31.105 Namespace Granularity: Not Supported 00:24:31.105 SQ Associations: Not Supported 00:24:31.105 UUID List: Not Supported 00:24:31.105 Multi-Domain Subsystem: Not Supported 00:24:31.105 Fixed Capacity Management: Not Supported 00:24:31.105 Variable Capacity Management: Not Supported 00:24:31.105 Delete Endurance Group: Not Supported 00:24:31.105 Delete NVM Set: Not Supported 00:24:31.105 Extended LBA Formats Supported: Not Supported 00:24:31.105 Flexible Data Placement Supported: Not Supported 00:24:31.105 00:24:31.105 Controller Memory Buffer Support 00:24:31.105 ================================ 00:24:31.105 Supported: No 00:24:31.105 00:24:31.105 Persistent Memory Region Support 00:24:31.105 ================================ 00:24:31.105 Supported: No 00:24:31.105 00:24:31.105 Admin Command Set Attributes 00:24:31.105 ============================ 00:24:31.105 Security Send/Receive: Not Supported 00:24:31.105 Format NVM: Not Supported 00:24:31.105 Firmware Activate/Download: Not Supported 00:24:31.105 Namespace Management: Not Supported 00:24:31.105 Device Self-Test: Not Supported 00:24:31.105 Directives: Not Supported 00:24:31.105 NVMe-MI: Not Supported 00:24:31.105 Virtualization Management: Not Supported 00:24:31.105 Doorbell Buffer Config: Not Supported 00:24:31.105 Get LBA Status Capability: Not Supported 00:24:31.105 Command & Feature Lockdown Capability: Not Supported 00:24:31.105 Abort Command Limit: 4 00:24:31.105 Async Event Request Limit: 4 00:24:31.105 Number of Firmware Slots: N/A 00:24:31.105 Firmware Slot 1 Read-Only: N/A 00:24:31.105 Firmware Activation Without Reset: N/A 00:24:31.105 Multiple Update Detection Support: N/A 00:24:31.105 Firmware Update Granularity: No Information Provided 00:24:31.105 Per-Namespace SMART Log: Yes 00:24:31.105 Asymmetric Namespace Access Log Page: Supported 00:24:31.105 ANA Transition Time : 10 sec 00:24:31.105 00:24:31.105 Asymmetric Namespace Access Capabilities 00:24:31.105 ANA Optimized State : Supported 00:24:31.105 ANA Non-Optimized State : Supported 00:24:31.105 ANA Inaccessible State : Supported 00:24:31.105 ANA Persistent Loss State : Supported 00:24:31.105 ANA Change State : Supported 00:24:31.105 ANAGRPID is not changed : No 00:24:31.105 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:31.105 00:24:31.105 ANA Group Identifier Maximum : 128 00:24:31.105 Number of ANA Group Identifiers : 128 00:24:31.105 Max Number of Allowed Namespaces : 1024 00:24:31.105 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:31.105 Command Effects Log Page: Supported 00:24:31.105 Get Log Page Extended Data: Supported 00:24:31.105 Telemetry Log Pages: Not Supported 00:24:31.105 Persistent Event Log Pages: Not Supported 00:24:31.105 Supported Log Pages Log Page: May Support 00:24:31.105 Commands Supported & Effects Log Page: Not Supported 00:24:31.105 Feature Identifiers & Effects Log Page:May Support 00:24:31.105 NVMe-MI Commands & Effects Log Page: May Support 00:24:31.105 Data Area 4 for Telemetry Log: Not Supported 00:24:31.105 Error Log Page Entries Supported: 128 00:24:31.105 Keep Alive: Supported 00:24:31.105 Keep Alive Granularity: 1000 ms 00:24:31.105 00:24:31.105 NVM Command Set Attributes 00:24:31.105 ========================== 00:24:31.105 Submission Queue Entry Size 00:24:31.105 Max: 64 00:24:31.105 Min: 64 00:24:31.105 Completion Queue Entry Size 00:24:31.105 Max: 16 00:24:31.105 Min: 16 00:24:31.105 Number of Namespaces: 1024 00:24:31.105 Compare Command: Not Supported 00:24:31.105 Write Uncorrectable Command: Not Supported 00:24:31.105 Dataset Management Command: Supported 00:24:31.105 Write Zeroes Command: Supported 00:24:31.105 Set Features Save Field: Not Supported 00:24:31.105 Reservations: Not Supported 00:24:31.105 Timestamp: Not Supported 00:24:31.105 Copy: Not Supported 00:24:31.105 Volatile Write Cache: Present 00:24:31.105 Atomic Write Unit (Normal): 1 00:24:31.105 Atomic Write Unit (PFail): 1 00:24:31.105 Atomic Compare & Write Unit: 1 00:24:31.105 Fused Compare & Write: Not Supported 00:24:31.105 Scatter-Gather List 00:24:31.105 SGL Command Set: Supported 00:24:31.105 SGL Keyed: Not Supported 00:24:31.105 SGL Bit Bucket Descriptor: Not Supported 00:24:31.105 SGL Metadata Pointer: Not Supported 00:24:31.105 Oversized SGL: Not Supported 00:24:31.105 SGL Metadata Address: Not Supported 00:24:31.105 SGL Offset: Supported 00:24:31.105 Transport SGL Data Block: Not Supported 00:24:31.105 Replay Protected Memory Block: Not Supported 00:24:31.105 00:24:31.105 Firmware Slot Information 00:24:31.105 ========================= 00:24:31.105 Active slot: 0 00:24:31.105 00:24:31.105 Asymmetric Namespace Access 00:24:31.105 =========================== 00:24:31.105 Change Count : 0 00:24:31.105 Number of ANA Group Descriptors : 1 00:24:31.105 ANA Group Descriptor : 0 00:24:31.105 ANA Group ID : 1 00:24:31.105 Number of NSID Values : 1 00:24:31.105 Change Count : 0 00:24:31.105 ANA State : 1 00:24:31.105 Namespace Identifier : 1 00:24:31.105 00:24:31.105 Commands Supported and Effects 00:24:31.105 ============================== 00:24:31.105 Admin Commands 00:24:31.105 -------------- 00:24:31.105 Get Log Page (02h): Supported 00:24:31.105 Identify (06h): Supported 00:24:31.105 Abort (08h): Supported 00:24:31.105 Set Features (09h): Supported 00:24:31.105 Get Features (0Ah): Supported 00:24:31.105 Asynchronous Event Request (0Ch): Supported 00:24:31.105 Keep Alive (18h): Supported 00:24:31.105 I/O Commands 00:24:31.105 ------------ 00:24:31.105 Flush (00h): Supported 00:24:31.105 Write (01h): Supported LBA-Change 00:24:31.105 Read (02h): Supported 00:24:31.105 Write Zeroes (08h): Supported LBA-Change 00:24:31.105 Dataset Management (09h): Supported 00:24:31.105 00:24:31.105 Error Log 00:24:31.105 ========= 00:24:31.105 Entry: 0 00:24:31.105 Error Count: 0x3 00:24:31.105 Submission Queue Id: 0x0 00:24:31.105 Command Id: 0x5 00:24:31.105 Phase Bit: 0 00:24:31.105 Status Code: 0x2 00:24:31.105 Status Code Type: 0x0 00:24:31.105 Do Not Retry: 1 00:24:31.105 
Error Location: 0x28 00:24:31.105 LBA: 0x0 00:24:31.105 Namespace: 0x0 00:24:31.105 Vendor Log Page: 0x0 00:24:31.105 ----------- 00:24:31.105 Entry: 1 00:24:31.105 Error Count: 0x2 00:24:31.105 Submission Queue Id: 0x0 00:24:31.105 Command Id: 0x5 00:24:31.105 Phase Bit: 0 00:24:31.105 Status Code: 0x2 00:24:31.105 Status Code Type: 0x0 00:24:31.105 Do Not Retry: 1 00:24:31.105 Error Location: 0x28 00:24:31.105 LBA: 0x0 00:24:31.105 Namespace: 0x0 00:24:31.105 Vendor Log Page: 0x0 00:24:31.105 ----------- 00:24:31.105 Entry: 2 00:24:31.105 Error Count: 0x1 00:24:31.105 Submission Queue Id: 0x0 00:24:31.105 Command Id: 0x4 00:24:31.105 Phase Bit: 0 00:24:31.105 Status Code: 0x2 00:24:31.105 Status Code Type: 0x0 00:24:31.105 Do Not Retry: 1 00:24:31.105 Error Location: 0x28 00:24:31.105 LBA: 0x0 00:24:31.105 Namespace: 0x0 00:24:31.105 Vendor Log Page: 0x0 00:24:31.105 00:24:31.105 Number of Queues 00:24:31.105 ================ 00:24:31.105 Number of I/O Submission Queues: 128 00:24:31.105 Number of I/O Completion Queues: 128 00:24:31.105 00:24:31.105 ZNS Specific Controller Data 00:24:31.105 ============================ 00:24:31.105 Zone Append Size Limit: 0 00:24:31.105 00:24:31.105 00:24:31.105 Active Namespaces 00:24:31.105 ================= 00:24:31.105 get_feature(0x05) failed 00:24:31.105 Namespace ID:1 00:24:31.105 Command Set Identifier: NVM (00h) 00:24:31.105 Deallocate: Supported 00:24:31.105 Deallocated/Unwritten Error: Not Supported 00:24:31.105 Deallocated Read Value: Unknown 00:24:31.105 Deallocate in Write Zeroes: Not Supported 00:24:31.106 Deallocated Guard Field: 0xFFFF 00:24:31.106 Flush: Supported 00:24:31.106 Reservation: Not Supported 00:24:31.106 Namespace Sharing Capabilities: Multiple Controllers 00:24:31.106 Size (in LBAs): 1953525168 (931GiB) 00:24:31.106 Capacity (in LBAs): 1953525168 (931GiB) 00:24:31.106 Utilization (in LBAs): 1953525168 (931GiB) 00:24:31.106 UUID: 5e533769-05fd-41e3-bcb2-b7d08bd85fbe 00:24:31.106 Thin Provisioning: Not Supported 00:24:31.106 Per-NS Atomic Units: Yes 00:24:31.106 Atomic Boundary Size (Normal): 0 00:24:31.106 Atomic Boundary Size (PFail): 0 00:24:31.106 Atomic Boundary Offset: 0 00:24:31.106 NGUID/EUI64 Never Reused: No 00:24:31.106 ANA group ID: 1 00:24:31.106 Namespace Write Protected: No 00:24:31.106 Number of LBA Formats: 1 00:24:31.106 Current LBA Format: LBA Format #00 00:24:31.106 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:31.106 00:24:31.106 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:31.106 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:31.106 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:31.106 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:31.106 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:31.106 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:31.106 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:31.106 rmmod nvme_tcp 00:24:31.106 rmmod nvme_fabrics 00:24:31.106 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:31.365 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:31.365 12:46:11 
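The two controller dumps above (the discovery controller, then the exported NVM subsystem with its ANA groups, error log and 931 GiB namespace) come from commands that can be replayed by hand against the same target. A sketch using the addresses and NQNs from the trace; the host NQN and host ID are simply the values this run used:

  nvme discover -t tcp -a 10.0.0.1 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The get_feature(0x01/0x02/0x04/0x05) failures printed before the second dump are emitted by the identify tool when the target declines those optional Get Features commands; the test run above treats them as informational and still passes.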
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:31.365 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:24:31.365 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:31.365 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:31.365 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:31.365 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:31.365 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:31.366 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:31.366 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:31.366 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:31.366 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:31.366 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.366 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.366 12:46:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.272 12:46:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:33.272 12:46:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:33.272 12:46:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:33.272 12:46:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:33.272 12:46:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:33.272 12:46:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:33.272 12:46:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:33.272 12:46:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:33.272 12:46:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:33.272 12:46:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:33.272 12:46:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:34.649 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:34.649 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:34.649 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:34.649 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:34.649 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:34.649 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:24:34.649 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:34.649 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:34.649 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:34.649 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:34.649 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:34.649 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:34.649 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:34.649 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:34.649 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:34.649 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:35.586 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:35.844 00:24:35.844 real 0m10.043s 00:24:35.844 user 0m2.206s 00:24:35.844 sys 0m3.832s 00:24:35.844 12:46:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:35.845 12:46:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.845 ************************************ 00:24:35.845 END TEST nvmf_identify_kernel_target 00:24:35.845 ************************************ 00:24:35.845 12:46:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:35.845 12:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:35.845 12:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:35.845 12:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.845 ************************************ 00:24:35.845 START TEST nvmf_auth_host 00:24:35.845 ************************************ 00:24:35.845 12:46:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:35.845 * Looking for test storage... 
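The kernel target that the just-finished identify_kernel_target test exercised was assembled entirely through configfs (the mkdir/echo/ln sequence in nvmf/common.sh@686-705 earlier in the trace) and then removed in reverse order by clean_kernel_target (rm the port symlink, rmdir namespace, port and subsystem, modprobe -r nvmet_tcp nvmet). The trace only logs the bare echo commands, so the attribute file names below are filled in from the standard nvmet configfs layout rather than taken verbatim from the log; a minimal sketch of the create path:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=$nvmet/ports/1
  modprobe nvmet nvmet-tcp
  mkdir -p "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string echoed in the trace
  echo 1            > "$subsys/attr_allow_any_host"              # accept any host NQN
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # back the namespace with the local SSD
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                            # expose the subsystem on the port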
00:24:35.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:35.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.845 --rc genhtml_branch_coverage=1 00:24:35.845 --rc genhtml_function_coverage=1 00:24:35.845 --rc genhtml_legend=1 00:24:35.845 --rc geninfo_all_blocks=1 00:24:35.845 --rc geninfo_unexecuted_blocks=1 00:24:35.845 00:24:35.845 ' 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:35.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.845 --rc genhtml_branch_coverage=1 00:24:35.845 --rc genhtml_function_coverage=1 00:24:35.845 --rc genhtml_legend=1 00:24:35.845 --rc geninfo_all_blocks=1 00:24:35.845 --rc geninfo_unexecuted_blocks=1 00:24:35.845 00:24:35.845 ' 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:35.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.845 --rc genhtml_branch_coverage=1 00:24:35.845 --rc genhtml_function_coverage=1 00:24:35.845 --rc genhtml_legend=1 00:24:35.845 --rc geninfo_all_blocks=1 00:24:35.845 --rc geninfo_unexecuted_blocks=1 00:24:35.845 00:24:35.845 ' 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:35.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.845 --rc genhtml_branch_coverage=1 00:24:35.845 --rc genhtml_function_coverage=1 00:24:35.845 --rc genhtml_legend=1 00:24:35.845 --rc geninfo_all_blocks=1 00:24:35.845 --rc geninfo_unexecuted_blocks=1 00:24:35.845 00:24:35.845 ' 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.845 12:46:16 
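The scripts/common.sh trace above is the harness deciding whether the installed lcov (its version taken from the last field of `lcov --version`) is older than 2, which determines the spelling of the coverage --rc options it exports next. The comparison splits both version strings on '.', '-' and ':' and walks the fields numerically. A compact sketch of that check, assuming only what the trace shows:

  lt() {                                   # return 0 when $1 < $2 for dotted numeric versions
    local IFS=.- i a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                               # equal is not less-than
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov detected"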
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.845 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:35.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:35.846 12:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.373 12:46:18 
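At this point auth.sh has declared the parameters it will drive the in-band authentication tests with: three hash digests (sha256, sha384, sha512), five DH groups (ffdhe2048 through ffdhe8192), the kernel-host NQN nqn.2024-02.io.spdk:host0, the SPDK subsystem NQN nqn.2024-02.io.spdk:cnode0, and empty keys/ckeys arrays to be filled later. A hypothetical sketch of how that parameter matrix expands; the real script wires each combination through its own connect/authenticate helpers rather than a bare loop:

  digests=("sha256" "sha384" "sha512")
  dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      echo "would exercise auth with hash=$digest dhgroup=$dhgroup"
    done
  done   # 3 digests x 5 DH groups = 15 combinations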
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:38.373 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:38.373 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.373 
12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:38.373 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:38.373 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.373 12:46:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.373 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:24:38.374 00:24:38.374 --- 10.0.0.2 ping statistics --- 00:24:38.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.374 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:24:38.374 00:24:38.374 --- 10.0.0.1 ping statistics --- 00:24:38.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.374 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1115977 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1115977 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1115977 ']' 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
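In short, the nvmf_tcp_init sequence above builds the following test topology (a condensed sketch; interface names, addresses and the nvmf_tgt invocation are taken from the log itself, while the helpers in nvmf/common.sh add flushing and error handling not repeated here):

# One E810 port (cvl_0_0) becomes the target-side interface inside a private
# network namespace; the other port (cvl_0_1) stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # root-namespace IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # namespace IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # admit NVMe/TCP
ping -c 1 10.0.0.2                                 # root namespace -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root namespace
modprobe nvme-tcp
# The SPDK application is then started inside the namespace with nvme_auth tracing:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &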
00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=83b82734ce632f78a96c107e465c8dd2 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4V9 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 83b82734ce632f78a96c107e465c8dd2 0 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 83b82734ce632f78a96c107e465c8dd2 0 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=83b82734ce632f78a96c107e465c8dd2 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4V9 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4V9 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.4V9 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.374 12:46:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8b3897b5149af7349f86ae94179625631e5e3282328ad38e2f305151af7a92ee 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.6uS 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8b3897b5149af7349f86ae94179625631e5e3282328ad38e2f305151af7a92ee 3 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8b3897b5149af7349f86ae94179625631e5e3282328ad38e2f305151af7a92ee 3 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8b3897b5149af7349f86ae94179625631e5e3282328ad38e2f305151af7a92ee 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:38.374 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.6uS 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.6uS 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.6uS 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=651674af5b385e8392b4b6e6e925f59bd92cf388c436dd0d 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yEw 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 651674af5b385e8392b4b6e6e925f59bd92cf388c436dd0d 0 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 651674af5b385e8392b4b6e6e925f59bd92cf388c436dd0d 0 
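The gen_dhchap_key calls above draw len/2 random bytes as lowercase hex from /dev/urandom (xxd -p -c0 -l <len/2>), store them under /tmp/spdk.key-<digest>.XXX with mode 0600, and wrap the value in a DHHC-1 secret string. The stand-alone sketch below assumes the usual DH-HMAC-CHAP secret encoding (base64 of the ASCII key followed by its CRC-32 in little-endian order, with the two-digit field selecting the hash: 00 null, 01 sha256, 02 sha384, 03 sha512, matching the digests map in the log), so treat it as an approximation of format_dhchap_key rather than its exact implementation:

# gen_key <digest_id> <len>: hypothetical helper mirroring gen_dhchap_key above.
gen_key() {
    local digest_id=$1 len=$2      # digest_id: 0=null 1=sha256 2=sha384 3=sha512
    local hex
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of randomness
    python3 - "$digest_id" "$hex" <<'PY'
import base64, sys, zlib
digest, key = int(sys.argv[1]), sys.argv[2].encode()
crc = zlib.crc32(key).to_bytes(4, "little")          # CRC-32 tail, little-endian
print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))
PY
}

gen_key 0 32    # e.g. a 32-character null-digest key, like keys[0] above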
00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=651674af5b385e8392b4b6e6e925f59bd92cf388c436dd0d 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yEw 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yEw 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.yEw 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fd46e5fbcb7612819df735ff55ea94b2809444a08775522b 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3e0 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fd46e5fbcb7612819df735ff55ea94b2809444a08775522b 2 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fd46e5fbcb7612819df735ff55ea94b2809444a08775522b 2 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fd46e5fbcb7612819df735ff55ea94b2809444a08775522b 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3e0 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3e0 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.3e0 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.633 12:46:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4d064c6b03b7b051bbb661e584899749 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8Z6 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4d064c6b03b7b051bbb661e584899749 1 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4d064c6b03b7b051bbb661e584899749 1 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4d064c6b03b7b051bbb661e584899749 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8Z6 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8Z6 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.8Z6 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eded70ddea764e1125eafad1c667bcc6 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.mAt 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eded70ddea764e1125eafad1c667bcc6 1 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eded70ddea764e1125eafad1c667bcc6 1 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=eded70ddea764e1125eafad1c667bcc6 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.mAt 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.mAt 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.mAt 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e42eb69ada57708e6c660f74b4c61ac2d61f991ec5b1b3df 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yWL 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e42eb69ada57708e6c660f74b4c61ac2d61f991ec5b1b3df 2 00:24:38.633 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e42eb69ada57708e6c660f74b4c61ac2d61f991ec5b1b3df 2 00:24:38.634 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.634 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.634 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e42eb69ada57708e6c660f74b4c61ac2d61f991ec5b1b3df 00:24:38.634 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:38.634 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yWL 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yWL 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.yWL 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:38.892 12:46:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=56853b9cebd74c980aa149ff62b8665d 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.scV 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 56853b9cebd74c980aa149ff62b8665d 0 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 56853b9cebd74c980aa149ff62b8665d 0 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=56853b9cebd74c980aa149ff62b8665d 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:38.892 12:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.scV 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.scV 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.scV 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7912f117d4dda6614e51fe9b7c4c962bbbf1ab241ffeb55c1e4c03de90429045 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.iBg 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7912f117d4dda6614e51fe9b7c4c962bbbf1ab241ffeb55c1e4c03de90429045 3 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7912f117d4dda6614e51fe9b7c4c962bbbf1ab241ffeb55c1e4c03de90429045 3 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7912f117d4dda6614e51fe9b7c4c962bbbf1ab241ffeb55c1e4c03de90429045 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.iBg 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.iBg 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.iBg 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1115977 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1115977 ']' 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.892 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4V9 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.6uS ]] 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6uS 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.yEw 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.3e0 ]] 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.3e0 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.8Z6 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.mAt ]] 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mAt 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.yWL 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.scV ]] 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.scV 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.iBg 00:24:39.151 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.152 12:46:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:39.152 12:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:40.085 Waiting for block devices as requested 00:24:40.085 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:40.343 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:40.343 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:40.602 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:40.602 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:40.602 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:40.602 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:40.859 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:40.859 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:40.859 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:40.859 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:41.117 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:41.117 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:41.117 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:41.374 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:41.374 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:41.374 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:41.940 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:41.940 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:41.940 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:41.940 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:41.940 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:41.940 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:41.940 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:41.940 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:41.940 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:41.941 No valid GPT data, bailing 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:41.941 12:46:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:41.941 00:24:41.941 Discovery Log Number of Records 2, Generation counter 2 00:24:41.941 =====Discovery Log Entry 0====== 00:24:41.941 trtype: tcp 00:24:41.941 adrfam: ipv4 00:24:41.941 subtype: current discovery subsystem 00:24:41.941 treq: not specified, sq flow control disable supported 00:24:41.941 portid: 1 00:24:41.941 trsvcid: 4420 00:24:41.941 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:41.941 traddr: 10.0.0.1 00:24:41.941 eflags: none 00:24:41.941 sectype: none 00:24:41.941 =====Discovery Log Entry 1====== 00:24:41.941 trtype: tcp 00:24:41.941 adrfam: ipv4 00:24:41.941 subtype: nvme subsystem 00:24:41.941 treq: not specified, sq flow control disable supported 00:24:41.941 portid: 1 00:24:41.941 trsvcid: 4420 00:24:41.941 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:41.941 traddr: 10.0.0.1 00:24:41.941 eflags: none 00:24:41.941 sectype: none 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.941 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.942 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:41.942 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.942 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.199 nvme0n1 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.199 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
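Each connect_authenticate pass in this test reduces to four RPCs against the SPDK application running in the namespace, which acts as the authenticating NVMe host toward the kernel nvmet target configured at 10.0.0.1:4420 above. Roughly, rpc_cmd wraps SPDK's scripts/rpc.py; key1/ckey1 name the keyring entries registered with keyring_file_add_key earlier, and the NQNs and address come straight from the log:

# One authentication round-trip, repeated below for every digest/dhgroup/key combination.
scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers        # expect "nvme0" once DH-HMAC-CHAP succeeds
scripts/rpc.py bdev_nvme_detach_controller nvme0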
00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.200 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.457 nvme0n1 00:24:42.457 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.457 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.457 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.457 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.457 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.457 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.457 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.457 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.458 12:46:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.458 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.715 nvme0n1 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.715 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.716 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.716 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.716 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.716 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.716 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.716 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.716 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.716 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.716 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.716 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.716 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:42.716 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.716 12:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.973 nvme0n1 00:24:42.973 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.973 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.973 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.973 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:24:42.973 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.973 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.973 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.973 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.973 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.973 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.974 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.232 nvme0n1 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 
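[editor's note] The repeated get_main_ns_ip trace (nvmf/common.sh@769-783) resolves which address the host should dial: a transport-to-variable map is consulted and the resolved value, 10.0.0.1 here, is echoed back. An approximate reconstruction is sketched below; the xtrace only shows expanded values, so the TEST_TRANSPORT name and the indirect expansion are assumptions inferred from the trace, not the verbatim source.

  # Approximate reconstruction of get_main_ns_ip as seen in the trace
  # (only ip and ip_candidates are named in the xtrace; the rest is assumed).
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      # @775: [[ -z tcp ]] / [[ -z NVMF_INITIATOR_IP ]]
      if [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
          return 1
      fi
      ip=${ip_candidates[$TEST_TRANSPORT]}   # @776: ip=NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # @778: [[ -z 10.0.0.1 ]]
      echo "${!ip}"                          # @783: echo 10.0.0.1
  }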
00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.232 nvme0n1 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.232 12:46:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.232 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:43.489 
12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.489 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.490 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.490 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.490 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.490 nvme0n1 00:24:43.490 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.490 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.490 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.490 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.490 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.490 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.490 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.490 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.490 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.490 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:43.747 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.748 12:46:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.748 12:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.748 nvme0n1 00:24:43.748 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.748 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.748 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.748 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.748 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.748 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.748 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.748 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.748 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.748 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.006 12:46:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.006 nvme0n1 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.006 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.007 12:46:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.007 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.264 nvme0n1 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.264 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
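[editor's note] Each connect_authenticate pass reduces to four RPCs against the host-side bdev_nvme stack, as the trace around host/auth.sh@60-65 shows: restrict the allowed digest/DH group, attach with the key pair under test, verify a controller named nvme0 appears, then detach. A sketch of the iteration that just completed (sha256 / ffdhe3072 / key index 3) is below. It assumes SPDK's scripts/rpc.py as the client behind the test's rpc_cmd wrapper, and that the key3/ckey3 keyring entries were registered earlier in the test; both are assumptions, the flags and values themselves mirror the trace. For key index 4 the controller key is empty, so --dhchap-ctrlr-key is simply omitted, which is why the attach in the next entries carries only --dhchap-key key4.

  # Sketch of one connect_authenticate iteration (assumes scripts/rpc.py and
  # previously registered keyring entries keyN/ckeyN; values mirror the trace).
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0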
00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.265 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.523 nvme0n1 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.523 12:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.781 nvme0n1 00:24:44.781 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.781 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.781 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.781 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.781 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.781 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.039 12:46:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.039 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.308 nvme0n1 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.308 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
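Editor's note (not part of the captured output): the xtrace lines around this point repeat the same per-key DH-HMAC-CHAP cycle for every key index and, further down, for the larger DH groups (ffdhe6144, ffdhe8192). As a reading aid, the sketch below reconstructs that cycle from the traced commands only; rpc_cmd, the key names key0..key4 / ckey0..ckey3, the NQNs and the 10.0.0.1:4420 listener are taken verbatim from the trace, while the loop wrapper and the absence of error handling are assumptions.

    # Hedged reconstruction of the per-key cycle visible in this trace.
    # rpc_cmd is the SPDK RPC wrapper provided by the test scripts.
    for keyid in 0 1 2 3; do          # keyid 4 runs the same steps but has no controller key
        # Target side: install the key for the digest/dhgroup under test.
        nvmet_auth_set_key sha256 ffdhe4096 "$keyid"

        # Host side: restrict the initiator to the same digest and DH group.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

        # Attach through the tcp listener with the key (and controller key) under test.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

        # Verify the controller authenticated and came up, then detach for the next key.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done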
00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.309 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.567 nvme0n1 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
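Editor's note (not part of the captured output): every attach in this trace is preceded by the same block of ip_candidates lines from nvmf/common.sh (get_main_ns_ip). With the tcp transport used in this run it resolves to NVMF_INITIATOR_IP, i.e. 10.0.0.1. The sketch below is reconstructed from the xtrace output alone; the variable name TEST_TRANSPORT and the error handling are assumptions, since the trace only shows the already-expanded value tcp.

    # Hedged sketch of how get_main_ns_ip behaves in this run.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # The transport is tcp here, so the initiator address is selected.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # indirect expansion of NVMF_INITIATOR_IP
        echo "${!ip}"                 # prints 10.0.0.1 in this run
    }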
00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:45.567 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:45.568 12:46:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.568 12:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.826 nvme0n1 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:45.826 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.827 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.084 nvme0n1 00:24:46.084 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.084 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.084 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.084 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.084 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.084 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.342 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.600 nvme0n1 00:24:46.600 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.600 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.600 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.600 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.600 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.857 12:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.857 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.857 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.857 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.858 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.858 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.858 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.858 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.858 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.858 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.858 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.858 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.858 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.858 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:46.858 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.858 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.422 nvme0n1 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.422 12:46:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:24:47.422 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.423 12:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.988 nvme0n1 00:24:47.988 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.988 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.988 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.988 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.988 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.988 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.988 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:47.989 
12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.989 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.247 nvme0n1 00:24:48.247 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.247 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.247 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.247 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.247 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:48.504 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.505 12:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.071 nvme0n1 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:49.071 12:46:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.071 12:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.003 nvme0n1 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:50.003 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.004 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.937 nvme0n1 00:24:50.937 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.937 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.937 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.937 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.937 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.937 12:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.937 12:46:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.937 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.869 nvme0n1 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.869 12:46:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:51.869 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.870 12:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.802 nvme0n1 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:52.802 12:46:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:52.802 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:52.803 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.803 12:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.368 nvme0n1 00:24:53.368 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.368 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.368 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.368 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.368 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.368 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:24:53.626 
12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.626 nvme0n1 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.626 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:53.885 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:53.886 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:53.886 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.886 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:53.886 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.886 12:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.886 nvme0n1 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.886 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.144 nvme0n1 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:54.144 12:46:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.144 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.145 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.145 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.145 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.145 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:54.145 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.145 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.403 nvme0n1 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:54.403 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.661 nvme0n1 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:54.661 12:46:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.661 12:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.919 nvme0n1 00:24:54.919 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.919 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.919 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.919 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.919 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.919 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.919 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.919 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.920 12:46:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.920 12:46:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.920 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.179 nvme0n1 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:55.179 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.180 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.438 nvme0n1 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.438 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.439 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:55.439 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.439 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.697 nvme0n1 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:55.697 
12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.697 12:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.956 nvme0n1 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.956 
12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.956 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.957 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.957 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.957 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.957 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.957 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.957 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.957 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.957 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:55.957 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.957 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.215 nvme0n1 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:56.215 12:46:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.215 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.473 nvme0n1 00:24:56.473 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.473 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.473 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.473 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.473 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.473 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.731 12:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.989 nvme0n1 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.989 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.247 nvme0n1 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:57.247 12:46:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.247 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.248 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.248 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.248 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.248 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.248 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:57.248 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.248 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.506 nvme0n1 00:24:57.506 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.506 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.506 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.506 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.506 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.506 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.506 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.506 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.506 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.506 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:57.763 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.764 12:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.330 nvme0n1 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.330 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.331 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:58.331 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.331 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.896 nvme0n1 00:24:58.896 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.896 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.896 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.896 12:46:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.896 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.896 12:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.896 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.896 12:46:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.897 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.897 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.897 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.897 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.897 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.897 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.897 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.897 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.897 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.897 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.897 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:58.897 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.897 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.462 nvme0n1 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:59.462 12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.462 
12:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.028 nvme0n1 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.028 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:00.029 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.029 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:00.029 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:00.029 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:00.029 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.029 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.029 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.595 nvme0n1 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.595 12:46:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.595 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.596 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.596 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:00.596 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.596 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:00.596 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:00.596 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:00.596 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.596 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.596 12:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.530 nvme0n1 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.530 12:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.464 nvme0n1 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:02.464 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.465 
12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.465 12:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.399 nvme0n1 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:03.399 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.400 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.400 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.400 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.400 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.400 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.400 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:03.400 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.400 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.400 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:03.400 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.400 12:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.333 nvme0n1 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.333 12:46:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.333 12:46:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.333 12:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.899 nvme0n1 00:25:04.899 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.899 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.899 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.899 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.899 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.899 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:05.158 nvme0n1 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.158 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.159 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.417 nvme0n1 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:05.417 
12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.417 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.418 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.418 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.418 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.418 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.418 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.418 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.418 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.418 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.418 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.418 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.418 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.418 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.418 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.677 nvme0n1 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.677 
12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.677 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.678 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.678 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.678 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.678 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.678 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.678 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.678 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:05.678 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.678 12:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.936 nvme0n1 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:05.936 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.937 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.195 nvme0n1 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.195 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.454 nvme0n1 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.454 
12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.454 12:46:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.454 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.712 nvme0n1 00:25:06.712 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.712 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.712 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.712 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.712 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.712 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.712 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:06.713 12:46:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.713 12:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.971 nvme0n1 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.971 12:46:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.971 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.972 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.972 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.972 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.972 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.972 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.972 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:06.972 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.972 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.230 nvme0n1 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:07.230 
12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.230 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.231 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.231 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.231 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.231 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.231 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.231 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.231 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:07.231 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.231 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
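[Editor's note] For readability, the per-iteration flow that the xtrace output above keeps repeating (host/auth.sh cycling digest sha512 over each dhgroup and keyid against nvme0 at 10.0.0.1:4420) can be sketched roughly as the shell outline below. This is a paraphrase reconstructed from the trace, not the actual host/auth.sh source: the rpc_cmd invocations, NQNs, and key/ckey naming are copied from the log, while the loop scaffolding and comments are assumptions.

# Reconstructed sketch (assumption) of the loop recorded in the trace above
for dhgroup in "${dhgroups[@]}"; do            # e.g. ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ...
  for keyid in "${!keys[@]}"; do               # key indices 0..4
    # target side: install the DH-HMAC-CHAP key (and controller key, if any) for this keyid
    nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
    # host side: restrict the allowed digest/dhgroup, then connect with the matching key(s)
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
    # verify the authenticated controller actually came up, then tear it down for the next pass
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done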
00:25:07.491 nvme0n1 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:07.491 12:46:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.491 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.749 nvme0n1 00:25:07.749 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.750 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.750 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.750 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.750 12:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.750 12:46:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.750 12:46:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.750 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.008 nvme0n1 00:25:08.008 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.008 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.008 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.008 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.008 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.008 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.266 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.524 nvme0n1 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.524 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.525 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.525 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.525 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.525 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.525 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.525 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.525 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.525 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.525 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.525 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.525 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:08.525 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.525 12:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.783 nvme0n1 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.783 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.784 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.784 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.784 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.784 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.784 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:08.784 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.784 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.042 nvme0n1 00:25:09.042 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.042 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.042 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.042 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.042 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.042 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.300 12:46:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.300 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.301 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.301 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.301 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:09.301 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.301 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.867 nvme0n1 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.867 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.868 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.868 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.868 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.868 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.868 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.868 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.868 12:46:49 
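
The key material itself uses the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hh>:<base64>: . The reading below is inferred from that representation rather than from this log: the two-digit field indicates how the secret is transformed before use (00 for none, 01/02/03 for SHA-256/384/512), and the base64 payload carries the raw secret followed by what is presumably a 4-byte CRC. The keyid-0 secret from the first iteration decodes consistently with that:

    # 48 base64 characters -> 36 bytes: a 32-byte secret plus, presumably, a 4-byte CRC.
    echo 'ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk' | base64 -d | wc -c   # prints 36
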
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.868 12:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.434 nvme0n1 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:10.434 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.435 12:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.693 nvme0n1 00:25:10.693 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.693 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.693 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.693 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.693 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.693 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
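
get_main_ns_ip, which the trace expands before every attach, only decides which environment variable holds the address the initiator should dial: an associative array maps the transport to a variable name (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and the chosen name is then dereferenced, which is how the expansion ends in 10.0.0.1 here. A reconstruction from the expanded lines; the indirect expansion and the $TEST_TRANSPORT variable name are inferred, since the trace shows only the literal values tcp and 10.0.0.1:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
        ip=${!ip}                              # inferred indirection -> 10.0.0.1
        [[ -z $ip ]] && return 1
        echo "$ip"
    }
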
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.951 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.518 nvme0n1 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.518 12:46:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.518 12:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.775 nvme0n1 00:25:11.775 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.775 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.775 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.775 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.775 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.775 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
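
keyid 4 is the one entry without a paired controller key: ckey is empty (auth.sh@46 above shows ckey=), so the expansion ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) at auth.sh@58 produces an empty array and the attach just completed was issued with --dhchap-key key4 only, i.e. without bidirectional authentication. The :+ idiom in isolation, with purely illustrative values:

    ckeys=( [3]="some-ctrlr-secret" [4]="" )   # illustrative, not the test's real secrets
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # 0: nothing extra is appended to the attach RPC for keyid 4
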
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODNiODI3MzRjZTYzMmY3OGE5NmMxMDdlNDY1YzhkZDIlx1zk: 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: ]] 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGIzODk3YjUxNDlhZjczNDlmODZhZTk0MTc5NjI1NjMxZTVlMzI4MjMyOGFkMzhlMmYzMDUxNTFhZjdhOTJlZcaFATs=: 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.033 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.034 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.034 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.034 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.034 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.034 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:12.034 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.034 12:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.969 nvme0n1 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.969 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.903 nvme0n1 00:25:13.903 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.903 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.903 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.903 12:46:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.903 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.903 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.903 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.903 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.903 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.903 12:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:13.903 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.904 12:46:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.904 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.838 nvme0n1 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTQyZWI2OWFkYTU3NzA4ZTZjNjYwZjc0YjRjNjFhYzJkNjFmOTkxZWM1YjFiM2RmrT6DJQ==: 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: ]] 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY4NTNiOWNlYmQ3NGM5ODBhYTE0OWZmNjJiODY2NWS69hBj: 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.838 12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.838 
12:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.773 nvme0n1 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkxMmYxMTdkNGRkYTY2MTRlNTFmZTliN2M0Yzk2MmJiYmYxYWIyNDFmZmViNTVjMWU0YzAzZGU5MDQyOTA0Ne2QDTo=: 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.773 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.774 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.774 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.774 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.774 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:15.774 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.774 12:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.339 nvme0n1 00:25:16.339 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.339 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.339 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.339 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.339 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.339 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.597 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.598 request: 00:25:16.598 { 00:25:16.598 "name": "nvme0", 00:25:16.598 "trtype": "tcp", 00:25:16.598 "traddr": "10.0.0.1", 00:25:16.598 "adrfam": "ipv4", 00:25:16.598 "trsvcid": "4420", 00:25:16.598 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:16.598 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:16.598 "prchk_reftag": false, 00:25:16.598 "prchk_guard": false, 00:25:16.598 "hdgst": false, 00:25:16.598 "ddgst": false, 00:25:16.598 "allow_unrecognized_csi": false, 00:25:16.598 "method": "bdev_nvme_attach_controller", 00:25:16.598 "req_id": 1 00:25:16.598 } 00:25:16.598 Got JSON-RPC error response 00:25:16.598 response: 00:25:16.598 { 00:25:16.598 "code": -5, 00:25:16.598 "message": "Input/output error" 00:25:16.598 } 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
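
From auth.sh@110 onward the target is re-keyed for sha256/ffdhe2048 with key 1 and the remaining attach attempts are meant to fail: NOT is autotest_common.sh's negate-the-exit-status helper, each rejected bdev_nvme_attach_controller surfaces as the JSON-RPC error -5 (Input/output error) captured above, and a jq length of 0 over bdev_nvme_get_controllers confirms no controller object was left behind. The same expected-failure pattern, condensed (this first case passes no DH-CHAP key at all; the later ones pass only key2, then the mismatched key1/ckey2 pair):

    # Host offers no key although the target now requires DH-HMAC-CHAP.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))
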
00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.598 request: 00:25:16.598 { 00:25:16.598 "name": "nvme0", 00:25:16.598 "trtype": "tcp", 00:25:16.598 "traddr": "10.0.0.1", 00:25:16.598 "adrfam": "ipv4", 00:25:16.598 "trsvcid": "4420", 00:25:16.598 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:16.598 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:16.598 "prchk_reftag": false, 00:25:16.598 "prchk_guard": false, 00:25:16.598 "hdgst": false, 00:25:16.598 "ddgst": false, 00:25:16.598 "dhchap_key": "key2", 00:25:16.598 "allow_unrecognized_csi": false, 00:25:16.598 "method": "bdev_nvme_attach_controller", 00:25:16.598 "req_id": 1 00:25:16.598 } 00:25:16.598 Got JSON-RPC error response 00:25:16.598 response: 00:25:16.598 { 00:25:16.598 "code": -5, 00:25:16.598 "message": "Input/output error" 00:25:16.598 } 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.598 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.856 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:25:16.856 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:16.856 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.856 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.856 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.856 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.857 12:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.857 request: 00:25:16.857 { 00:25:16.857 "name": "nvme0", 00:25:16.857 "trtype": "tcp", 00:25:16.857 "traddr": "10.0.0.1", 00:25:16.857 "adrfam": "ipv4", 00:25:16.857 "trsvcid": "4420", 00:25:16.857 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:16.857 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:16.857 "prchk_reftag": false, 00:25:16.857 "prchk_guard": false, 00:25:16.857 "hdgst": false, 00:25:16.857 "ddgst": false, 00:25:16.857 "dhchap_key": "key1", 00:25:16.857 "dhchap_ctrlr_key": "ckey2", 00:25:16.857 "allow_unrecognized_csi": false, 00:25:16.857 "method": "bdev_nvme_attach_controller", 00:25:16.857 "req_id": 1 00:25:16.857 } 00:25:16.857 Got JSON-RPC error response 00:25:16.857 response: 00:25:16.857 { 00:25:16.857 "code": -5, 00:25:16.857 "message": "Input/output 
error" 00:25:16.857 } 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.857 nvme0n1 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.857 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.115 request: 00:25:17.115 { 00:25:17.115 "name": "nvme0", 00:25:17.115 "dhchap_key": "key1", 00:25:17.115 "dhchap_ctrlr_key": "ckey2", 00:25:17.115 "method": "bdev_nvme_set_keys", 00:25:17.115 "req_id": 1 00:25:17.115 } 00:25:17.115 Got JSON-RPC error response 00:25:17.115 response: 00:25:17.115 { 00:25:17.115 "code": -13, 00:25:17.115 "message": "Permission denied" 00:25:17.115 } 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:17.115 12:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:18.487 12:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:18.487 12:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.487 12:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.487 12:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.487 12:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.487 12:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:18.487 12:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjUxNjc0YWY1YjM4NWU4MzkyYjRiNmU2ZTkyNWY1OWJkOTJjZjM4OGM0MzZkZDBkjJ5Plw==: 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: ]] 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZmQ0NmU1ZmJjYjc2MTI4MTlkZjczNWZmNTVlYTk0YjI4MDk0NDRhMDg3NzU1MjJiiaRWcw==: 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.420 nvme0n1 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQwNjRjNmIwM2I3YjA1MWJiYjY2MWU1ODQ4OTk3NDk++dAm: 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: ]] 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWRlZDcwZGRlYTc2NGUxMTI1ZWFmYWQxYzY2N2JjYzYJdCSW: 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.420 request: 00:25:19.420 { 00:25:19.420 "name": "nvme0", 00:25:19.420 "dhchap_key": "key2", 00:25:19.420 "dhchap_ctrlr_key": "ckey1", 00:25:19.420 "method": "bdev_nvme_set_keys", 00:25:19.420 "req_id": 1 00:25:19.420 } 00:25:19.420 Got JSON-RPC error response 00:25:19.420 response: 00:25:19.420 { 00:25:19.420 "code": -13, 00:25:19.420 "message": "Permission denied" 00:25:19.420 } 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.420 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.421 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.421 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:19.421 12:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:20.795 12:47:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:20.795 rmmod nvme_tcp 00:25:20.795 rmmod nvme_fabrics 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:20.795 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:20.796 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1115977 ']' 00:25:20.796 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1115977 00:25:20.796 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1115977 ']' 00:25:20.796 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1115977 00:25:20.796 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:20.796 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.796 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1115977 00:25:20.796 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:20.796 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:20.796 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1115977' 00:25:20.796 killing process with pid 1115977 00:25:20.796 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1115977 00:25:20.796 12:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1115977 00:25:20.796 12:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:20.796 12:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:20.796 12:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:20.796 12:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:20.796 12:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:20.796 12:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:20.796 12:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:20.796 12:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:20.796 12:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:20.796 12:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.796 12:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:25:20.796 12:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.329 12:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:23.329 12:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:23.329 12:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:23.329 12:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:23.329 12:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:23.329 12:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:23.329 12:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:23.330 12:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:23.330 12:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:23.330 12:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:23.330 12:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:23.330 12:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:23.330 12:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:24.264 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:24.264 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:24.264 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:24.264 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:24.264 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:24.264 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:24.264 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:24.264 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:24.264 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:24.264 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:24.264 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:24.264 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:24.264 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:24.264 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:24.264 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:24.264 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:25.206 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:25:25.463 12:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.4V9 /tmp/spdk.key-null.yEw /tmp/spdk.key-sha256.8Z6 /tmp/spdk.key-sha384.yWL /tmp/spdk.key-sha512.iBg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:25.463 12:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:26.397 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:26.397 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:26.397 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
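The teardown traced above runs in two layers: nvmftestfini unloads nvme-tcp/nvme-fabrics, kills the nvmf app (pid 1115977), restores the iptables rules tagged SPDK_NVMF and flushes the cvl_0_1/cvl_0_0_ns_spdk networking state, and clean_kernel_target then dismantles the kernel nvmet configfs tree in reverse order of creation before setup.sh rebinds devices. Collected in one place, the configfs portion of that sequence looks roughly like the sketch below (taken from the host/auth.sh and nvmf/common.sh traces; the redirect target of the bare 'echo 0' is an assumption, since xtrace does not print redirections):

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  rm    "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > "$subsys/namespaces/1/enable"   # assumed target: disable the namespace first
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet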
00:25:26.397 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:26.397 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:26.397 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:26.397 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:26.397 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:26.397 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:26.397 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:26.397 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:26.397 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:26.397 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:26.397 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:26.397 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:26.397 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:26.397 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:26.655 00:25:26.655 real 0m50.851s 00:25:26.655 user 0m48.430s 00:25:26.655 sys 0m5.920s 00:25:26.655 12:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.655 12:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.655 ************************************ 00:25:26.655 END TEST nvmf_auth_host 00:25:26.655 ************************************ 00:25:26.655 12:47:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:26.655 12:47:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:26.655 12:47:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:26.655 12:47:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:26.655 12:47:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.655 ************************************ 00:25:26.655 START TEST nvmf_digest 00:25:26.655 ************************************ 00:25:26.655 12:47:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:26.655 * Looking for test storage... 
00:25:26.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:26.655 12:47:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:26.655 12:47:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:25:26.655 12:47:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:26.914 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:26.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.915 --rc genhtml_branch_coverage=1 00:25:26.915 --rc genhtml_function_coverage=1 00:25:26.915 --rc genhtml_legend=1 00:25:26.915 --rc geninfo_all_blocks=1 00:25:26.915 --rc geninfo_unexecuted_blocks=1 00:25:26.915 00:25:26.915 ' 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:26.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.915 --rc genhtml_branch_coverage=1 00:25:26.915 --rc genhtml_function_coverage=1 00:25:26.915 --rc genhtml_legend=1 00:25:26.915 --rc geninfo_all_blocks=1 00:25:26.915 --rc geninfo_unexecuted_blocks=1 00:25:26.915 00:25:26.915 ' 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:26.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.915 --rc genhtml_branch_coverage=1 00:25:26.915 --rc genhtml_function_coverage=1 00:25:26.915 --rc genhtml_legend=1 00:25:26.915 --rc geninfo_all_blocks=1 00:25:26.915 --rc geninfo_unexecuted_blocks=1 00:25:26.915 00:25:26.915 ' 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:26.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.915 --rc genhtml_branch_coverage=1 00:25:26.915 --rc genhtml_function_coverage=1 00:25:26.915 --rc genhtml_legend=1 00:25:26.915 --rc geninfo_all_blocks=1 00:25:26.915 --rc geninfo_unexecuted_blocks=1 00:25:26.915 00:25:26.915 ' 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.915 
12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:26.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:26.915 12:47:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:26.915 12:47:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.817 
12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:28.817 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:28.817 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.817 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:28.818 Found net devices under 0000:0a:00.0: cvl_0_0 
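The discovery loop traced above walks the whitelisted PCI NICs (here both ports of an Intel E810, device ID 0x159b) and resolves each function to its kernel net device through sysfs, keeping only interfaces that are up (the [[ up == up ]] checks). Stripped of the driver and transport branching, the per-device step amounts to roughly the following sketch of the pattern visible in the trace, not a verbatim copy of nvmf/common.sh:

  # Map each candidate PCI function to its net interface name via sysfs.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done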
00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:28.818 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:28.818 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.076 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.076 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.076 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:29.076 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:29.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:25:29.076 00:25:29.076 --- 10.0.0.2 ping statistics --- 00:25:29.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.077 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:29.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:25:29.077 00:25:29.077 --- 10.0.0.1 ping statistics --- 00:25:29.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.077 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:29.077 ************************************ 00:25:29.077 START TEST nvmf_digest_clean 00:25:29.077 ************************************ 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1125589 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1125589 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1125589 ']' 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:29.077 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:29.077 [2024-11-15 12:47:09.294475] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:25:29.077 [2024-11-15 12:47:09.294567] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.077 [2024-11-15 12:47:09.366506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.335 [2024-11-15 12:47:09.426983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.335 [2024-11-15 12:47:09.427045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.335 [2024-11-15 12:47:09.427073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.335 [2024-11-15 12:47:09.427084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.335 [2024-11-15 12:47:09.427094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:29.335 [2024-11-15 12:47:09.427680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.335 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.335 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:29.335 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:29.335 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:29.335 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:29.335 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.335 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:29.335 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:29.335 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:29.335 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.335 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:29.335 null0 00:25:29.335 [2024-11-15 12:47:09.666281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.593 [2024-11-15 12:47:09.690488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1125616 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1125616 /var/tmp/bperf.sock 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1125616 ']' 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:29.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:29.593 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:29.593 [2024-11-15 12:47:09.741657] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:25:29.593 [2024-11-15 12:47:09.741803] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125616 ] 00:25:29.593 [2024-11-15 12:47:09.808211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.593 [2024-11-15 12:47:09.867332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.851 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.851 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:29.851 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:29.851 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:29.851 12:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:30.109 12:47:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:30.109 12:47:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:30.674 nvme0n1 00:25:30.674 12:47:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:30.674 12:47:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:30.674 Running I/O for 2 seconds... 
00:25:32.980 18485.00 IOPS, 72.21 MiB/s [2024-11-15T11:47:13.324Z] 18542.00 IOPS, 72.43 MiB/s 00:25:32.981 Latency(us) 00:25:32.981 [2024-11-15T11:47:13.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.981 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:32.981 nvme0n1 : 2.00 18560.74 72.50 0.00 0.00 6889.48 3325.35 15243.19 00:25:32.981 [2024-11-15T11:47:13.325Z] =================================================================================================================== 00:25:32.981 [2024-11-15T11:47:13.325Z] Total : 18560.74 72.50 0.00 0.00 6889.48 3325.35 15243.19 00:25:32.981 { 00:25:32.981 "results": [ 00:25:32.981 { 00:25:32.981 "job": "nvme0n1", 00:25:32.981 "core_mask": "0x2", 00:25:32.981 "workload": "randread", 00:25:32.981 "status": "finished", 00:25:32.981 "queue_depth": 128, 00:25:32.981 "io_size": 4096, 00:25:32.981 "runtime": 2.004877, 00:25:32.981 "iops": 18560.739636396647, 00:25:32.981 "mibps": 72.5028892046744, 00:25:32.981 "io_failed": 0, 00:25:32.981 "io_timeout": 0, 00:25:32.981 "avg_latency_us": 6889.475554918563, 00:25:32.981 "min_latency_us": 3325.345185185185, 00:25:32.981 "max_latency_us": 15243.188148148149 00:25:32.981 } 00:25:32.981 ], 00:25:32.981 "core_count": 1 00:25:32.981 } 00:25:32.981 12:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:32.981 12:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:32.981 12:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:32.981 12:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:32.981 | select(.opcode=="crc32c") 00:25:32.981 | "\(.module_name) \(.executed)"' 00:25:32.981 12:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1125616 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1125616 ']' 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1125616 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1125616 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1125616' 00:25:32.981 killing process with pid 1125616 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1125616 00:25:32.981 Received shutdown signal, test time was about 2.000000 seconds 00:25:32.981 00:25:32.981 Latency(us) 00:25:32.981 [2024-11-15T11:47:13.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.981 [2024-11-15T11:47:13.325Z] =================================================================================================================== 00:25:32.981 [2024-11-15T11:47:13.325Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:32.981 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1125616 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1126062 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1126062 /var/tmp/bperf.sock 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1126062 ']' 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:33.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.239 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.239 [2024-11-15 12:47:13.500652] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:25:33.239 [2024-11-15 12:47:13.500749] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126062 ] 00:25:33.239 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:33.239 Zero copy mechanism will not be used. 00:25:33.239 [2024-11-15 12:47:13.564262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.497 [2024-11-15 12:47:13.623349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.497 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.497 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:33.497 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:33.497 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:33.497 12:47:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:34.063 12:47:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:34.063 12:47:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:34.321 nvme0n1 00:25:34.321 12:47:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:34.321 12:47:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:34.321 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:34.321 Zero copy mechanism will not be used. 00:25:34.321 Running I/O for 2 seconds... 
00:25:36.270 6001.00 IOPS, 750.12 MiB/s [2024-11-15T11:47:16.614Z] 5961.00 IOPS, 745.12 MiB/s 00:25:36.270 Latency(us) 00:25:36.270 [2024-11-15T11:47:16.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.270 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:36.270 nvme0n1 : 2.00 5961.37 745.17 0.00 0.00 2679.87 658.39 5145.79 00:25:36.270 [2024-11-15T11:47:16.614Z] =================================================================================================================== 00:25:36.270 [2024-11-15T11:47:16.614Z] Total : 5961.37 745.17 0.00 0.00 2679.87 658.39 5145.79 00:25:36.270 { 00:25:36.270 "results": [ 00:25:36.270 { 00:25:36.270 "job": "nvme0n1", 00:25:36.270 "core_mask": "0x2", 00:25:36.270 "workload": "randread", 00:25:36.270 "status": "finished", 00:25:36.270 "queue_depth": 16, 00:25:36.270 "io_size": 131072, 00:25:36.270 "runtime": 2.002561, 00:25:36.270 "iops": 5961.366470234864, 00:25:36.270 "mibps": 745.170808779358, 00:25:36.270 "io_failed": 0, 00:25:36.270 "io_timeout": 0, 00:25:36.270 "avg_latency_us": 2679.8698162729656, 00:25:36.270 "min_latency_us": 658.3940740740741, 00:25:36.270 "max_latency_us": 5145.789629629629 00:25:36.270 } 00:25:36.270 ], 00:25:36.270 "core_count": 1 00:25:36.270 } 00:25:36.533 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:36.533 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:36.533 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:36.533 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:36.533 | select(.opcode=="crc32c") 00:25:36.533 | "\(.module_name) \(.executed)"' 00:25:36.533 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:36.791 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:36.792 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:36.792 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:36.792 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:36.792 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1126062 00:25:36.792 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1126062 ']' 00:25:36.792 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1126062 00:25:36.792 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:36.792 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:36.792 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1126062 00:25:36.792 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:36.792 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:25:36.792 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1126062' 00:25:36.792 killing process with pid 1126062 00:25:36.792 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1126062 00:25:36.792 Received shutdown signal, test time was about 2.000000 seconds 00:25:36.792 00:25:36.792 Latency(us) 00:25:36.792 [2024-11-15T11:47:17.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.792 [2024-11-15T11:47:17.136Z] =================================================================================================================== 00:25:36.792 [2024-11-15T11:47:17.136Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:36.792 12:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1126062 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1126545 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1126545 /var/tmp/bperf.sock 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1126545 ']' 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:37.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.050 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:37.050 [2024-11-15 12:47:17.215553] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:25:37.050 [2024-11-15 12:47:17.215637] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126545 ] 00:25:37.050 [2024-11-15 12:47:17.281144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.050 [2024-11-15 12:47:17.336546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.308 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:37.308 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:37.308 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:37.308 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:37.308 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:37.566 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:37.566 12:47:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:37.824 nvme0n1 00:25:37.824 12:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:37.824 12:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:38.081 Running I/O for 2 seconds... 
00:25:39.946 19212.00 IOPS, 75.05 MiB/s [2024-11-15T11:47:20.290Z] 18782.00 IOPS, 73.37 MiB/s 00:25:39.946 Latency(us) 00:25:39.946 [2024-11-15T11:47:20.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.946 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:39.946 nvme0n1 : 2.01 18781.92 73.37 0.00 0.00 6799.23 2742.80 11408.12 00:25:39.946 [2024-11-15T11:47:20.290Z] =================================================================================================================== 00:25:39.946 [2024-11-15T11:47:20.290Z] Total : 18781.92 73.37 0.00 0.00 6799.23 2742.80 11408.12 00:25:39.946 { 00:25:39.946 "results": [ 00:25:39.946 { 00:25:39.946 "job": "nvme0n1", 00:25:39.946 "core_mask": "0x2", 00:25:39.946 "workload": "randwrite", 00:25:39.946 "status": "finished", 00:25:39.946 "queue_depth": 128, 00:25:39.946 "io_size": 4096, 00:25:39.946 "runtime": 2.008527, 00:25:39.946 "iops": 18781.923270137766, 00:25:39.946 "mibps": 73.36688777397565, 00:25:39.946 "io_failed": 0, 00:25:39.946 "io_timeout": 0, 00:25:39.946 "avg_latency_us": 6799.229384574906, 00:25:39.946 "min_latency_us": 2742.8029629629627, 00:25:39.946 "max_latency_us": 11408.118518518519 00:25:39.946 } 00:25:39.946 ], 00:25:39.946 "core_count": 1 00:25:39.946 } 00:25:39.946 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:39.946 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:39.946 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:39.946 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:39.946 | select(.opcode=="crc32c") 00:25:39.946 | "\(.module_name) \(.executed)"' 00:25:39.946 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1126545 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1126545 ']' 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1126545 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1126545 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1126545' 00:25:40.512 killing process with pid 1126545 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1126545 00:25:40.512 Received shutdown signal, test time was about 2.000000 seconds 00:25:40.512 00:25:40.512 Latency(us) 00:25:40.512 [2024-11-15T11:47:20.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.512 [2024-11-15T11:47:20.856Z] =================================================================================================================== 00:25:40.512 [2024-11-15T11:47:20.856Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1126545 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:40.512 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1126955 00:25:40.513 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:40.513 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1126955 /var/tmp/bperf.sock 00:25:40.513 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1126955 ']' 00:25:40.513 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:40.513 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:40.513 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:40.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:40.513 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:40.513 12:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:40.771 [2024-11-15 12:47:20.888950] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:25:40.771 [2024-11-15 12:47:20.889044] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126955 ] 00:25:40.771 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:40.771 Zero copy mechanism will not be used. 00:25:40.771 [2024-11-15 12:47:20.954073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.771 [2024-11-15 12:47:21.008387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.771 12:47:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.771 12:47:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:40.771 12:47:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:40.771 12:47:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:40.771 12:47:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:41.338 12:47:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:41.338 12:47:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:41.596 nvme0n1 00:25:41.596 12:47:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:41.596 12:47:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:41.853 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:41.853 Zero copy mechanism will not be used. 00:25:41.853 Running I/O for 2 seconds... 
00:25:43.718 5915.00 IOPS, 739.38 MiB/s [2024-11-15T11:47:24.319Z] 5936.50 IOPS, 742.06 MiB/s 00:25:43.975 Latency(us) 00:25:43.975 [2024-11-15T11:47:24.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.975 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:43.975 nvme0n1 : 2.00 5933.04 741.63 0.00 0.00 2689.93 1953.94 4708.88 00:25:43.975 [2024-11-15T11:47:24.319Z] =================================================================================================================== 00:25:43.975 [2024-11-15T11:47:24.319Z] Total : 5933.04 741.63 0.00 0.00 2689.93 1953.94 4708.88 00:25:43.975 { 00:25:43.975 "results": [ 00:25:43.975 { 00:25:43.975 "job": "nvme0n1", 00:25:43.975 "core_mask": "0x2", 00:25:43.975 "workload": "randwrite", 00:25:43.975 "status": "finished", 00:25:43.975 "queue_depth": 16, 00:25:43.975 "io_size": 131072, 00:25:43.975 "runtime": 2.00437, 00:25:43.975 "iops": 5933.0363156503045, 00:25:43.975 "mibps": 741.6295394562881, 00:25:43.975 "io_failed": 0, 00:25:43.975 "io_timeout": 0, 00:25:43.975 "avg_latency_us": 2689.930675835607, 00:25:43.975 "min_latency_us": 1953.9437037037037, 00:25:43.975 "max_latency_us": 4708.882962962963 00:25:43.975 } 00:25:43.975 ], 00:25:43.975 "core_count": 1 00:25:43.975 } 00:25:43.975 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:43.975 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:43.975 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:43.975 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:43.975 | select(.opcode=="crc32c") 00:25:43.975 | "\(.module_name) \(.executed)"' 00:25:43.975 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1126955 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1126955 ']' 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1126955 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1126955 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1126955' 00:25:44.233 killing process with pid 1126955 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1126955 00:25:44.233 Received shutdown signal, test time was about 2.000000 seconds 00:25:44.233 00:25:44.233 Latency(us) 00:25:44.233 [2024-11-15T11:47:24.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.233 [2024-11-15T11:47:24.577Z] =================================================================================================================== 00:25:44.233 [2024-11-15T11:47:24.577Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:44.233 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1126955 00:25:44.491 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1125589 00:25:44.491 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1125589 ']' 00:25:44.491 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1125589 00:25:44.491 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:44.491 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.491 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1125589 00:25:44.491 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:44.491 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:44.491 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1125589' 00:25:44.491 killing process with pid 1125589 00:25:44.491 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1125589 00:25:44.491 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1125589 00:25:44.749 00:25:44.749 real 0m15.622s 00:25:44.749 user 0m31.536s 00:25:44.749 sys 0m4.158s 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:44.749 ************************************ 00:25:44.749 END TEST nvmf_digest_clean 00:25:44.749 ************************************ 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:44.749 ************************************ 00:25:44.749 START TEST nvmf_digest_error 00:25:44.749 ************************************ 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1127489 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1127489 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1127489 ']' 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:44.749 12:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:44.749 [2024-11-15 12:47:24.970923] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:25:44.749 [2024-11-15 12:47:24.971039] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.749 [2024-11-15 12:47:25.041256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.033 [2024-11-15 12:47:25.098118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.033 [2024-11-15 12:47:25.098165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.033 [2024-11-15 12:47:25.098193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.033 [2024-11-15 12:47:25.098204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.033 [2024-11-15 12:47:25.098214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:45.033 [2024-11-15 12:47:25.098780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.033 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.033 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:45.033 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:45.033 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:45.033 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:45.033 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.033 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:45.033 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.033 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:45.033 [2024-11-15 12:47:25.219474] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:45.033 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.033 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:45.033 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:45.033 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.033 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:45.033 null0 00:25:45.033 [2024-11-15 12:47:25.341126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.361 [2024-11-15 12:47:25.365322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.361 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.361 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:45.361 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:45.361 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:45.361 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:45.361 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:45.361 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1127535 00:25:45.361 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1127535 /var/tmp/bperf.sock 00:25:45.361 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1127535 ']' 00:25:45.361 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:25:45.361 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:45.361 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.361 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:45.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:45.362 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.362 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:45.362 [2024-11-15 12:47:25.418092] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:25:45.362 [2024-11-15 12:47:25.418167] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127535 ] 00:25:45.362 [2024-11-15 12:47:25.484462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.362 [2024-11-15 12:47:25.543296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.362 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.362 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:45.362 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:45.362 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:45.679 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:45.679 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.679 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:45.679 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.679 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:45.679 12:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:45.994 nvme0n1 00:25:45.994 12:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:45.994 12:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.994 12:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:25:45.994 12:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.994 12:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:45.994 12:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:46.279 Running I/O for 2 seconds... 00:25:46.279 [2024-11-15 12:47:26.431439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.279 [2024-11-15 12:47:26.431490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.279 [2024-11-15 12:47:26.431517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.279 [2024-11-15 12:47:26.446787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.279 [2024-11-15 12:47:26.446836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.279 [2024-11-15 12:47:26.446864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.279 [2024-11-15 12:47:26.460554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.279 [2024-11-15 12:47:26.460586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.279 [2024-11-15 12:47:26.460628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.279 [2024-11-15 12:47:26.474116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.279 [2024-11-15 12:47:26.474149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.279 [2024-11-15 12:47:26.474192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.279 [2024-11-15 12:47:26.488379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.279 [2024-11-15 12:47:26.488412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.279 [2024-11-15 12:47:26.488438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.279 [2024-11-15 12:47:26.499262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.279 [2024-11-15 12:47:26.499292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.279 [2024-11-15 12:47:26.499317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.279 [2024-11-15 12:47:26.513888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.279 [2024-11-15 12:47:26.513918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.279 [2024-11-15 12:47:26.513944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.279 [2024-11-15 12:47:26.528982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.279 [2024-11-15 12:47:26.529014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.279 [2024-11-15 12:47:26.529054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.279 [2024-11-15 12:47:26.541122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.279 [2024-11-15 12:47:26.541151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.279 [2024-11-15 12:47:26.541176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.279 [2024-11-15 12:47:26.554675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.279 [2024-11-15 12:47:26.554704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.279 [2024-11-15 12:47:26.554752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.279 [2024-11-15 12:47:26.568645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.279 [2024-11-15 12:47:26.568674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.279 [2024-11-15 12:47:26.568698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.279 [2024-11-15 12:47:26.583409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.279 [2024-11-15 12:47:26.583439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.279 [2024-11-15 12:47:26.583486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.280 [2024-11-15 12:47:26.597170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.280 [2024-11-15 12:47:26.597201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.280 [2024-11-15 12:47:26.597228] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.280 [2024-11-15 12:47:26.611048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.280 [2024-11-15 12:47:26.611080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.280 [2024-11-15 12:47:26.611106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.622780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.622812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 12:47:26.622838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.638316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.638346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 12:47:26.638369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.652880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.652911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 12:47:26.652937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.664859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.664888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 12:47:26.664913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.679385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.679414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 12:47:26.679438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.694774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.694804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 
12:47:26.694828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.709860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.709911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 12:47:26.709937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.720803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.720833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 12:47:26.720856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.735868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.735914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 12:47:26.735941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.747146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.747175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 12:47:26.747199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.761593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.761625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 12:47:26.761668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.776347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.776380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 12:47:26.776408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.789047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.789080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20971 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 12:47:26.789107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.805341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.805391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 12:47:26.805419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.538 [2024-11-15 12:47:26.817390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.538 [2024-11-15 12:47:26.817437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.538 [2024-11-15 12:47:26.817462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.539 [2024-11-15 12:47:26.828759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.539 [2024-11-15 12:47:26.828789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.539 [2024-11-15 12:47:26.828830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.539 [2024-11-15 12:47:26.843776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.539 [2024-11-15 12:47:26.843808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.539 [2024-11-15 12:47:26.843835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.539 [2024-11-15 12:47:26.853996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.539 [2024-11-15 12:47:26.854029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.539 [2024-11-15 12:47:26.854056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.539 [2024-11-15 12:47:26.869606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.539 [2024-11-15 12:47:26.869654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.539 [2024-11-15 12:47:26.869681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.797 [2024-11-15 12:47:26.884465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.797 [2024-11-15 12:47:26.884498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:89 nsid:1 lba:1359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.797 [2024-11-15 12:47:26.884527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.797 [2024-11-15 12:47:26.896061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.797 [2024-11-15 12:47:26.896105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.797 [2024-11-15 12:47:26.896130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.797 [2024-11-15 12:47:26.909553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.797 [2024-11-15 12:47:26.909585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.797 [2024-11-15 12:47:26.909610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.797 [2024-11-15 12:47:26.921980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.797 [2024-11-15 12:47:26.922012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.797 [2024-11-15 12:47:26.922038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.797 [2024-11-15 12:47:26.935485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.797 [2024-11-15 12:47:26.935522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.797 [2024-11-15 12:47:26.935548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.797 [2024-11-15 12:47:26.949547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.797 [2024-11-15 12:47:26.949578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.797 [2024-11-15 12:47:26.949603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.797 [2024-11-15 12:47:26.965435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.797 [2024-11-15 12:47:26.965465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.797 [2024-11-15 12:47:26.965491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.798 [2024-11-15 12:47:26.980651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.798 [2024-11-15 12:47:26.980684] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.798 [2024-11-15 12:47:26.980711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.798 [2024-11-15 12:47:26.992237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.798 [2024-11-15 12:47:26.992267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.798 [2024-11-15 12:47:26.992306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.798 [2024-11-15 12:47:27.007740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.798 [2024-11-15 12:47:27.007773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.798 [2024-11-15 12:47:27.007799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.798 [2024-11-15 12:47:27.020985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.798 [2024-11-15 12:47:27.021017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.798 [2024-11-15 12:47:27.021044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.798 [2024-11-15 12:47:27.033398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.798 [2024-11-15 12:47:27.033428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.798 [2024-11-15 12:47:27.033453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.798 [2024-11-15 12:47:27.051013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.798 [2024-11-15 12:47:27.051047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.798 [2024-11-15 12:47:27.051075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.798 [2024-11-15 12:47:27.061949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.798 [2024-11-15 12:47:27.061982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.798 [2024-11-15 12:47:27.062014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.798 [2024-11-15 12:47:27.077666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x11d7bf0) 00:25:46.798 [2024-11-15 12:47:27.077712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.798 [2024-11-15 12:47:27.077749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.798 [2024-11-15 12:47:27.094312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.798 [2024-11-15 12:47:27.094344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.798 [2024-11-15 12:47:27.094369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.798 [2024-11-15 12:47:27.108213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.798 [2024-11-15 12:47:27.108245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.798 [2024-11-15 12:47:27.108271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.798 [2024-11-15 12:47:27.119434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.798 [2024-11-15 12:47:27.119464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.798 [2024-11-15 12:47:27.119488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.798 [2024-11-15 12:47:27.134213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:46.798 [2024-11-15 12:47:27.134259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.798 [2024-11-15 12:47:27.134300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.149297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.149329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.149355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.161130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.161175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.161200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.175670] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.175703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.175746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.186308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.186338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.186363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.202075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.202106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.202131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.219387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.219417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.219442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.233242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.233273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.233298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.246562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.246592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.246617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.258606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.258653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.258680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:47.056 [2024-11-15 12:47:27.272202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.272235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.272262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.284456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.284504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.284531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.299857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.299896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.299922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.315128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.315158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.315182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.331382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.331412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.331436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.347367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.347400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.347427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.361304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.361337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.361365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.377462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.377494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.377522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.056 [2024-11-15 12:47:27.392973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.056 [2024-11-15 12:47:27.393006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.056 [2024-11-15 12:47:27.393034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 [2024-11-15 12:47:27.404407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.315 [2024-11-15 12:47:27.404437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.315 [2024-11-15 12:47:27.404463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 18255.00 IOPS, 71.31 MiB/s [2024-11-15T11:47:27.659Z] [2024-11-15 12:47:27.421363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.315 [2024-11-15 12:47:27.421398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.315 [2024-11-15 12:47:27.421433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 [2024-11-15 12:47:27.438036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.315 [2024-11-15 12:47:27.438080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.315 [2024-11-15 12:47:27.438106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 [2024-11-15 12:47:27.453662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.315 [2024-11-15 12:47:27.453695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.315 [2024-11-15 12:47:27.453729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 [2024-11-15 12:47:27.464219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.315 [2024-11-15 12:47:27.464252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:47.315 [2024-11-15 12:47:27.464280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 [2024-11-15 12:47:27.480372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.315 [2024-11-15 12:47:27.480402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.315 [2024-11-15 12:47:27.480442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 [2024-11-15 12:47:27.491921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.315 [2024-11-15 12:47:27.491955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.315 [2024-11-15 12:47:27.491983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 [2024-11-15 12:47:27.509411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.315 [2024-11-15 12:47:27.509441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.315 [2024-11-15 12:47:27.509466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 [2024-11-15 12:47:27.524419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.315 [2024-11-15 12:47:27.524451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.315 [2024-11-15 12:47:27.524479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 [2024-11-15 12:47:27.535151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.315 [2024-11-15 12:47:27.535183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.315 [2024-11-15 12:47:27.535224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 [2024-11-15 12:47:27.550985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.315 [2024-11-15 12:47:27.551025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.315 [2024-11-15 12:47:27.551053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 [2024-11-15 12:47:27.566165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.315 [2024-11-15 12:47:27.566196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21190 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.315 [2024-11-15 12:47:27.566222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 [2024-11-15 12:47:27.580547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.315 [2024-11-15 12:47:27.580580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.315 [2024-11-15 12:47:27.580608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 [2024-11-15 12:47:27.592963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.315 [2024-11-15 12:47:27.592996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.315 [2024-11-15 12:47:27.593039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.315 [2024-11-15 12:47:27.604526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.316 [2024-11-15 12:47:27.604571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.316 [2024-11-15 12:47:27.604596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.316 [2024-11-15 12:47:27.617474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.316 [2024-11-15 12:47:27.617507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.316 [2024-11-15 12:47:27.617533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.316 [2024-11-15 12:47:27.634020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.316 [2024-11-15 12:47:27.634064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.316 [2024-11-15 12:47:27.634089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.316 [2024-11-15 12:47:27.648506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.316 [2024-11-15 12:47:27.648538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.316 [2024-11-15 12:47:27.648566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.660880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.660927] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.660955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.674971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.675005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:55 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.675033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.688772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.688802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.688827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.702370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.702400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.702424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.716892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.716928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.716955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.731817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.731864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.731891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.743403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.743433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.743458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.756528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.756558] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.756583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.770965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.770997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.771027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.785611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.785658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.785694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.799614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.799648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.799677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.814828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.814859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.814884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.830582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.830614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.830639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.845912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.845953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.845981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.858212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.858243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.858267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.872816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.872846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.872872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.885377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.885407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.885447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.899414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.899444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.899482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.575 [2024-11-15 12:47:27.913171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.575 [2024-11-15 12:47:27.913211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.575 [2024-11-15 12:47:27.913238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:27.924665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:27.924710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:27.924745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:27.938328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:27.938357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:27.938382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:27.951414] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:27.951443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:27.951468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:27.964754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:27.964783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:27.964809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:27.977243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:27.977273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:27.977298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:27.993400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:27.993430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:27.993454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:28.008797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:28.008830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:28.008858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:28.023132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:28.023164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:28.023191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:28.034598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:28.034627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:28.034652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:28.047949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:28.047982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:28.048022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:28.060536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:28.060567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:28.060591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:28.072564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:28.072595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:28.072621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:28.086497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:28.086527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:28.086552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:28.100907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:28.100953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:28.100978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:28.116968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:28.117000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:28.117045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:28.128181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:28.128209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:28.128233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:28.141995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:28.142035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:28.142075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:28.155764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:28.155797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:28.155824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.835 [2024-11-15 12:47:28.166794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:47.835 [2024-11-15 12:47:28.166824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.835 [2024-11-15 12:47:28.166849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.094 [2024-11-15 12:47:28.181630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.094 [2024-11-15 12:47:28.181662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.094 [2024-11-15 12:47:28.181688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.094 [2024-11-15 12:47:28.195628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.094 [2024-11-15 12:47:28.195658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.094 [2024-11-15 12:47:28.195682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.094 [2024-11-15 12:47:28.208670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.094 [2024-11-15 12:47:28.208722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.094 [2024-11-15 12:47:28.208753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.094 [2024-11-15 12:47:28.222219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.094 [2024-11-15 12:47:28.222252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.094 [2024-11-15 12:47:28.222293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.094 [2024-11-15 12:47:28.234930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.094 [2024-11-15 12:47:28.234961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.094 [2024-11-15 12:47:28.234997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.094 [2024-11-15 12:47:28.248908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.094 [2024-11-15 12:47:28.248942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.094 [2024-11-15 12:47:28.248971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.094 [2024-11-15 12:47:28.261039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.094 [2024-11-15 12:47:28.261071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.094 [2024-11-15 12:47:28.261096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.094 [2024-11-15 12:47:28.275335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.094 [2024-11-15 12:47:28.275365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.094 [2024-11-15 12:47:28.275388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.094 [2024-11-15 12:47:28.289020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.094 [2024-11-15 12:47:28.289067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.094 [2024-11-15 12:47:28.289094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.094 [2024-11-15 12:47:28.300592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.094 [2024-11-15 12:47:28.300627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.094 [2024-11-15 12:47:28.300651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.094 [2024-11-15 12:47:28.315448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.094 [2024-11-15 12:47:28.315480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:48.094 [2024-11-15 12:47:28.315519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.094 [2024-11-15 12:47:28.330496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.094 [2024-11-15 12:47:28.330527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.094 [2024-11-15 12:47:28.330566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.094 [2024-11-15 12:47:28.344839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.095 [2024-11-15 12:47:28.344871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.095 [2024-11-15 12:47:28.344897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.095 [2024-11-15 12:47:28.356345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.095 [2024-11-15 12:47:28.356374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.095 [2024-11-15 12:47:28.356397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.095 [2024-11-15 12:47:28.369840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.095 [2024-11-15 12:47:28.369882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.095 [2024-11-15 12:47:28.369915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.095 [2024-11-15 12:47:28.384634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.095 [2024-11-15 12:47:28.384664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.095 [2024-11-15 12:47:28.384688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.095 [2024-11-15 12:47:28.398949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.095 [2024-11-15 12:47:28.398981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.095 [2024-11-15 12:47:28.399008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.095 [2024-11-15 12:47:28.411092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.095 [2024-11-15 12:47:28.411121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:1103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.095 [2024-11-15 12:47:28.411145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.095 18409.50 IOPS, 71.91 MiB/s [2024-11-15T11:47:28.439Z] [2024-11-15 12:47:28.426467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11d7bf0) 00:25:48.095 [2024-11-15 12:47:28.426497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.095 [2024-11-15 12:47:28.426521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.095 00:25:48.095 Latency(us) 00:25:48.095 [2024-11-15T11:47:28.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.095 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:48.095 nvme0n1 : 2.01 18417.27 71.94 0.00 0.00 6939.19 3616.62 21748.24 00:25:48.095 [2024-11-15T11:47:28.439Z] =================================================================================================================== 00:25:48.095 [2024-11-15T11:47:28.439Z] Total : 18417.27 71.94 0.00 0.00 6939.19 3616.62 21748.24 00:25:48.095 { 00:25:48.095 "results": [ 00:25:48.095 { 00:25:48.095 "job": "nvme0n1", 00:25:48.095 "core_mask": "0x2", 00:25:48.095 "workload": "randread", 00:25:48.095 "status": "finished", 00:25:48.095 "queue_depth": 128, 00:25:48.095 "io_size": 4096, 00:25:48.095 "runtime": 2.008658, 00:25:48.095 "iops": 18417.271631108928, 00:25:48.095 "mibps": 71.94246730901925, 00:25:48.095 "io_failed": 0, 00:25:48.095 "io_timeout": 0, 00:25:48.095 "avg_latency_us": 6939.189036220087, 00:25:48.095 "min_latency_us": 3616.6162962962962, 00:25:48.095 "max_latency_us": 21748.242962962962 00:25:48.095 } 00:25:48.095 ], 00:25:48.095 "core_count": 1 00:25:48.095 } 00:25:48.353 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:48.353 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:48.353 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:48.353 | .driver_specific 00:25:48.353 | .nvme_error 00:25:48.353 | .status_code 00:25:48.353 | .command_transient_transport_error' 00:25:48.353 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:48.611 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:25:48.611 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1127535 00:25:48.611 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1127535 ']' 00:25:48.611 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1127535 00:25:48.611 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:48.611 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.611 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1127535 00:25:48.611 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:48.611 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:48.611 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1127535' 00:25:48.611 killing process with pid 1127535 00:25:48.611 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1127535 00:25:48.611 Received shutdown signal, test time was about 2.000000 seconds 00:25:48.611 00:25:48.611 Latency(us) 00:25:48.611 [2024-11-15T11:47:28.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.611 [2024-11-15T11:47:28.955Z] =================================================================================================================== 00:25:48.611 [2024-11-15T11:47:28.955Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:48.611 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1127535 00:25:48.870 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:48.870 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:48.870 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:48.870 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:48.870 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:48.870 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1127958 00:25:48.870 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:48.870 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1127958 /var/tmp/bperf.sock 00:25:48.870 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1127958 ']' 00:25:48.870 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:48.870 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:48.870 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:48.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:48.870 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:48.870 12:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:48.870 [2024-11-15 12:47:29.026277] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:25:48.870 [2024-11-15 12:47:29.026365] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127958 ] 00:25:48.870 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:48.870 Zero copy mechanism will not be used. 00:25:48.870 [2024-11-15 12:47:29.092054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.870 [2024-11-15 12:47:29.150806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.128 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:49.128 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:49.128 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:49.128 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:49.386 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:49.386 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.386 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:49.386 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.386 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:49.386 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:49.644 nvme0n1 00:25:49.644 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:49.644 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.644 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:49.644 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.645 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:49.645 12:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:49.904 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:49.904 Zero copy mechanism will not be used. 00:25:49.904 Running I/O for 2 seconds... 
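The xtrace output above shows host/digest.sh preparing the second error-injection pass: bdevperf is started in wait mode on /var/tmp/bperf.sock, per-bdev NVMe error statistics and unlimited retries are enabled, a TCP controller is attached with data digest (--ddgst) turned on, crc32c corruption is injected for 32 operations, and the workload is started with perform_tests. The following shell sketch is only an illustration condensed from this trace and is not part of the original log; the SPDK paths, the 10.0.0.2:4420 target and the bperf.sock socket are taken from this run, and the accel_error_inject_error calls are assumed to reach the default RPC socket used by rpc_cmd in this job rather than bperf.sock.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf in wait mode (-z): randread, 131072-byte I/O, queue depth 16, 2 s runtime.
# The harness waits for the RPC socket to appear before issuing any RPCs.
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &

# Keep per-bdev NVMe error counters and retry failed I/O indefinitely so the run completes.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# No crc32c corruption while the controller is attached (default RPC socket, as rpc_cmd uses here).
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digest enabled; namespace 1 shows up as bdev nvme0n1.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 32 crc32c computations so reads complete with data digest errors.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the I/O job defined on the bdevperf command line above.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

# Afterwards the test reads the transient transport error counter and expects it to be > 0,
# as get_transient_errcount did for the previous pass.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each injected digest error appears in the trace below as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which is exactly what that counter accumulates.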
00:25:49.904 [2024-11-15 12:47:30.043667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.043752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.043795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.049120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.049157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.049186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.054329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.054363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.054392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.059480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.059516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.059545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.063769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.063803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.063832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.068083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.068117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.068145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.072243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.072290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.072332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.077343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.077376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.077405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.081521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.081554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.081581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.086366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.086399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.086428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.090648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.090681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.090709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.094928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.094962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.094998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.099756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.099788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.099816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.105788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.105821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.105850] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.112677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.904 [2024-11-15 12:47:30.112710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.904 [2024-11-15 12:47:30.112745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.904 [2024-11-15 12:47:30.118645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.118676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.118716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.125118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.125150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.125177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.129063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.129096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.129123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.134354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.134385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.134429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.139605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.139638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.139665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.143162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.143213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.143240] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.148559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.148591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.148617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.152611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.152642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.152669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.157070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.157101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.157142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.162385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.162418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.162445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.166850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.166883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.166911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.171497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.171530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.171556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.175498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.175530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:49.905 [2024-11-15 12:47:30.175557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.181299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.181330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.181372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.185431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.185479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.185507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.189898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.189931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.189958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.195391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.195422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.195463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.200886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.200920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.200948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.207062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.207110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.207137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.214593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.214626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.214654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.221296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.221329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.221371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.227402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.227451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.227478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.231694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.905 [2024-11-15 12:47:30.231734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.905 [2024-11-15 12:47:30.231770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.905 [2024-11-15 12:47:30.236383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.906 [2024-11-15 12:47:30.236416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.906 [2024-11-15 12:47:30.236455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.906 [2024-11-15 12:47:30.243072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:49.906 [2024-11-15 12:47:30.243104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.906 [2024-11-15 12:47:30.243131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.165 [2024-11-15 12:47:30.249968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.165 [2024-11-15 12:47:30.250002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.165 [2024-11-15 12:47:30.250030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.165 [2024-11-15 12:47:30.255944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.165 [2024-11-15 12:47:30.255976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.165 [2024-11-15 12:47:30.256004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.165 [2024-11-15 12:47:30.261695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.165 [2024-11-15 12:47:30.261738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.165 [2024-11-15 12:47:30.261767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.165 [2024-11-15 12:47:30.266627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.165 [2024-11-15 12:47:30.266658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.165 [2024-11-15 12:47:30.266684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.165 [2024-11-15 12:47:30.271978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.165 [2024-11-15 12:47:30.272010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.165 [2024-11-15 12:47:30.272052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.165 [2024-11-15 12:47:30.276791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.165 [2024-11-15 12:47:30.276824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.165 [2024-11-15 12:47:30.276851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.165 [2024-11-15 12:47:30.280971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.165 [2024-11-15 12:47:30.281011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.165 [2024-11-15 12:47:30.281039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.165 [2024-11-15 12:47:30.286076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.165 [2024-11-15 12:47:30.286124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.286152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.291119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.291152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.291180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.295575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.295607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.295634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.299822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.299854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.299882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.304165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.304196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.304223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.309677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.309710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.309748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.313990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.314022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.314050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.320209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.320259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.320285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.326483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 
00:25:50.166 [2024-11-15 12:47:30.326516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.326544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.333613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.333646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.333673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.341484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.341517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.341558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.348461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.348494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.348521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.354223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.354270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.354294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.359358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.359390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.359416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.364270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.364301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.364327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.368067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.368098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.368124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.372345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.372378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.372413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.377396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.377428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.377455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.382440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.382472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.382499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.386464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.386496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.386523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.391410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.391443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.391471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.395156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.395202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.395228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.399703] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.399741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.399783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.405116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.405148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.405177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.410714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.410755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.410783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.417074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.417113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.417141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.422563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.166 [2024-11-15 12:47:30.422595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.166 [2024-11-15 12:47:30.422636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.166 [2024-11-15 12:47:30.428443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.167 [2024-11-15 12:47:30.428476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-15 12:47:30.428503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.167 [2024-11-15 12:47:30.434478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.167 [2024-11-15 12:47:30.434511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-15 12:47:30.434538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:25:50.167 [2024-11-15 12:47:30.438062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.167 [2024-11-15 12:47:30.438093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-15 12:47:30.438120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.167 [2024-11-15 12:47:30.444098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.167 [2024-11-15 12:47:30.444128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-15 12:47:30.444167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.167 [2024-11-15 12:47:30.449977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.167 [2024-11-15 12:47:30.450011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-15 12:47:30.450052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.167 [2024-11-15 12:47:30.455592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.167 [2024-11-15 12:47:30.455623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-15 12:47:30.455651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.167 [2024-11-15 12:47:30.461529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.167 [2024-11-15 12:47:30.461562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-15 12:47:30.461605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.167 [2024-11-15 12:47:30.467802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.167 [2024-11-15 12:47:30.467834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-15 12:47:30.467861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.167 [2024-11-15 12:47:30.473832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.167 [2024-11-15 12:47:30.473865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-15 12:47:30.473893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.167 [2024-11-15 12:47:30.479589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.167 [2024-11-15 12:47:30.479636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-15 12:47:30.479662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.167 [2024-11-15 12:47:30.486022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.167 [2024-11-15 12:47:30.486055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-15 12:47:30.486096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.167 [2024-11-15 12:47:30.492628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.167 [2024-11-15 12:47:30.492675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-15 12:47:30.492700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.167 [2024-11-15 12:47:30.499469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.167 [2024-11-15 12:47:30.499501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-15 12:47:30.499541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.167 [2024-11-15 12:47:30.506788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.167 [2024-11-15 12:47:30.506822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-15 12:47:30.506849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.426 [2024-11-15 12:47:30.512971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.426 [2024-11-15 12:47:30.513005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.426 [2024-11-15 12:47:30.513046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.426 [2024-11-15 12:47:30.519141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.426 [2024-11-15 12:47:30.519175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.426 [2024-11-15 12:47:30.519216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.426 [2024-11-15 12:47:30.525121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.426 [2024-11-15 12:47:30.525153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.426 [2024-11-15 12:47:30.525181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.426 [2024-11-15 12:47:30.530959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.426 [2024-11-15 12:47:30.530992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.426 [2024-11-15 12:47:30.531020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.426 [2024-11-15 12:47:30.536138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.426 [2024-11-15 12:47:30.536172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.426 [2024-11-15 12:47:30.536199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.426 [2024-11-15 12:47:30.540469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.426 [2024-11-15 12:47:30.540503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.426 [2024-11-15 12:47:30.540530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.426 [2024-11-15 12:47:30.545508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.426 [2024-11-15 12:47:30.545541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.426 [2024-11-15 12:47:30.545569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.426 [2024-11-15 12:47:30.550179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.426 [2024-11-15 12:47:30.550212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.426 [2024-11-15 12:47:30.550240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.426 [2024-11-15 12:47:30.555876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.426 [2024-11-15 12:47:30.555909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.426 [2024-11-15 12:47:30.555936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.426 [2024-11-15 12:47:30.559939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.426 [2024-11-15 12:47:30.559972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.560001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.566625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.566670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.566694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.573840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.573874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.573901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.581957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.581990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.582031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.590447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.590478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.590503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.598108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.598154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.598194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.605834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.605868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 
[2024-11-15 12:47:30.605896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.614167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.614216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.614242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.622674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.622708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.622747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.630077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.630123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.630155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.635584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.635615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.635640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.640525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.640555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.640579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.645422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.645451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.645476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.650300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.650347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.650374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.655714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.655755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.655784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.659852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.659885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.659911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.664375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.664407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.664433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.670541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.670574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.670603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.676866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.676904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.676933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.683693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.683734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.683763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.691404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.691436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.691463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.699390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.699423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.699450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.707769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.707802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.707844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.715445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.715477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.427 [2024-11-15 12:47:30.715504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.427 [2024-11-15 12:47:30.722975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.427 [2024-11-15 12:47:30.723008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.428 [2024-11-15 12:47:30.723036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.428 [2024-11-15 12:47:30.730855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.428 [2024-11-15 12:47:30.730888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.428 [2024-11-15 12:47:30.730917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.428 [2024-11-15 12:47:30.738600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.428 [2024-11-15 12:47:30.738648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.428 [2024-11-15 12:47:30.738676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.428 [2024-11-15 12:47:30.746545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.428 [2024-11-15 12:47:30.746579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.428 [2024-11-15 12:47:30.746607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.428 [2024-11-15 12:47:30.754506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.428 [2024-11-15 12:47:30.754540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.428 [2024-11-15 12:47:30.754568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.428 [2024-11-15 12:47:30.762532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.428 [2024-11-15 12:47:30.762566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.428 [2024-11-15 12:47:30.762606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.687 [2024-11-15 12:47:30.770358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.687 [2024-11-15 12:47:30.770393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.687 [2024-11-15 12:47:30.770420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.687 [2024-11-15 12:47:30.778339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.687 [2024-11-15 12:47:30.778373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.687 [2024-11-15 12:47:30.778400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.687 [2024-11-15 12:47:30.786224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.687 [2024-11-15 12:47:30.786272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.687 [2024-11-15 12:47:30.786298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.687 [2024-11-15 12:47:30.794146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.687 [2024-11-15 12:47:30.794180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.687 [2024-11-15 12:47:30.794208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.687 [2024-11-15 12:47:30.801975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.687 
[2024-11-15 12:47:30.802009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.687 [2024-11-15 12:47:30.802037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.687 [2024-11-15 12:47:30.808200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.687 [2024-11-15 12:47:30.808234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.687 [2024-11-15 12:47:30.808273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.812165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.812198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.812225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.816222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.816254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.816282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.821026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.821074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.821101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.825947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.825979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.826019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.831199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.831232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.831259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.835061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.835092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.835119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.840611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.840643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.840671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.846731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.846764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.846791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.853858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.853897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.853924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.860544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.860577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.860604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.868514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.868563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.868590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.876270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.876318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.876346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.882863] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.882896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.882923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.889716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.889772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.889815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.896398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.896431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.896460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.902468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.902501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.902529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.906050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.906082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.906110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.912092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.912124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.912165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.918357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.918388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.918415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:25:50.688 [2024-11-15 12:47:30.926538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.926570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.926612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.932952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.932985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.933013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.938346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.938379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.938406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.944039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.944072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.944100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.688 [2024-11-15 12:47:30.949457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.688 [2024-11-15 12:47:30.949489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.688 [2024-11-15 12:47:30.949517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.689 [2024-11-15 12:47:30.955788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.689 [2024-11-15 12:47:30.955821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.689 [2024-11-15 12:47:30.955849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.689 [2024-11-15 12:47:30.963514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.689 [2024-11-15 12:47:30.963547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.689 [2024-11-15 12:47:30.963582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.689 [2024-11-15 12:47:30.969906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.689 [2024-11-15 12:47:30.969938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.689 [2024-11-15 12:47:30.969966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.689 [2024-11-15 12:47:30.976819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.689 [2024-11-15 12:47:30.976853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.689 [2024-11-15 12:47:30.976881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.689 [2024-11-15 12:47:30.983210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.689 [2024-11-15 12:47:30.983244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.689 [2024-11-15 12:47:30.983271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.689 [2024-11-15 12:47:30.988513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.689 [2024-11-15 12:47:30.988545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.689 [2024-11-15 12:47:30.988572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.689 [2024-11-15 12:47:30.993808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.689 [2024-11-15 12:47:30.993841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.689 [2024-11-15 12:47:30.993869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.689 [2024-11-15 12:47:30.999362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.689 [2024-11-15 12:47:30.999395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.689 [2024-11-15 12:47:30.999423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.689 [2024-11-15 12:47:31.004772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.689 [2024-11-15 12:47:31.004805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.689 [2024-11-15 12:47:31.004831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.689 [2024-11-15 12:47:31.010376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.689 [2024-11-15 12:47:31.010409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.689 [2024-11-15 12:47:31.010437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.689 [2024-11-15 12:47:31.015956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.689 [2024-11-15 12:47:31.015995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.689 [2024-11-15 12:47:31.016024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.689 [2024-11-15 12:47:31.021360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.689 [2024-11-15 12:47:31.021393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.689 [2024-11-15 12:47:31.021420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.689 [2024-11-15 12:47:31.026553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.689 [2024-11-15 12:47:31.026586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.689 [2024-11-15 12:47:31.026613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.948 [2024-11-15 12:47:31.031741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.948 [2024-11-15 12:47:31.031775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.948 [2024-11-15 12:47:31.031803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.948 [2024-11-15 12:47:31.035544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.948 [2024-11-15 12:47:31.035578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.948 [2024-11-15 12:47:31.035605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.948 [2024-11-15 12:47:31.039597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.948 [2024-11-15 12:47:31.039644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.948 [2024-11-15 12:47:31.039671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.948 5366.00 IOPS, 670.75 MiB/s [2024-11-15T11:47:31.292Z] [2024-11-15 12:47:31.045637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.948 [2024-11-15 12:47:31.045670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.948 [2024-11-15 12:47:31.045711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.948 [2024-11-15 12:47:31.051268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.948 [2024-11-15 12:47:31.051300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.948 [2024-11-15 12:47:31.051326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.948 [2024-11-15 12:47:31.056582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.948 [2024-11-15 12:47:31.056615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.948 [2024-11-15 12:47:31.056650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.948 [2024-11-15 12:47:31.060363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.948 [2024-11-15 12:47:31.060398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.948 [2024-11-15 12:47:31.060425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.948 [2024-11-15 12:47:31.064704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.948 [2024-11-15 12:47:31.064745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.948 [2024-11-15 12:47:31.064780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.948 [2024-11-15 12:47:31.069860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.948 [2024-11-15 12:47:31.069894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.948 [2024-11-15 12:47:31.069922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.948 [2024-11-15 12:47:31.074953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.948 [2024-11-15 12:47:31.074986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.948 [2024-11-15 12:47:31.075014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.948 [2024-11-15 12:47:31.080906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.080939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.080967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.086494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.086527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.086554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.091802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.091835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.091862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.097032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.097064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.097092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.102291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.102331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.102360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.107141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.107174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.107202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.110881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.110913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.110941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.115454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.115487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.115516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.120738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.120771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.120799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.124262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.124295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.124322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.130674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.130707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.130748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.137366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.137413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.137440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.143854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.143887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.143933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.150455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.150502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.150528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.157040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.157087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.157115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.163169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.163203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.163230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.166850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.166883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.166911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.171907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.171939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.171967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.176339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.176373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.176400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.182371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.182404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.182433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.189686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 
00:25:50.949 [2024-11-15 12:47:31.189727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.189757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.197220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.197267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.949 [2024-11-15 12:47:31.197303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.949 [2024-11-15 12:47:31.205180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.949 [2024-11-15 12:47:31.205214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.205241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.209556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.209590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.209617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.216768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.216815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.216842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.222957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.222987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.223012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.228783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.228816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.228844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.234726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.234771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.234796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.239562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.239594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.239622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.243911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.243944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.243972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.247478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.247516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.247544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.252104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.252152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.252195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.258492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.258525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.258554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.262539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.262586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.262612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.268594] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.268625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.268651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.273786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.273818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.273846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.279055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.279102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.279128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.283747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.283780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.283807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.950 [2024-11-15 12:47:31.288158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:50.950 [2024-11-15 12:47:31.288191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.950 [2024-11-15 12:47:31.288218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.209 [2024-11-15 12:47:31.292380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.209 [2024-11-15 12:47:31.292414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.209 [2024-11-15 12:47:31.292441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.209 [2024-11-15 12:47:31.296848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.209 [2024-11-15 12:47:31.296880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.209 [2024-11-15 12:47:31.296910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:25:51.209 [2024-11-15 12:47:31.302223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.209 [2024-11-15 12:47:31.302255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.209 [2024-11-15 12:47:31.302295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.209 [2024-11-15 12:47:31.307803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.209 [2024-11-15 12:47:31.307835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.209 [2024-11-15 12:47:31.307877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.209 [2024-11-15 12:47:31.313265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.209 [2024-11-15 12:47:31.313299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.209 [2024-11-15 12:47:31.313327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.209 [2024-11-15 12:47:31.319048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.209 [2024-11-15 12:47:31.319094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.209 [2024-11-15 12:47:31.319121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.209 [2024-11-15 12:47:31.325384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.209 [2024-11-15 12:47:31.325417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.209 [2024-11-15 12:47:31.325460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.209 [2024-11-15 12:47:31.331714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.209 [2024-11-15 12:47:31.331754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.331782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.339272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.339305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.339343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.347028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.347062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.347089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.354695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.354737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.354771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.360903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.360936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.360963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.366778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.366811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.366851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.373551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.373585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.373612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.380866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.380915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.380943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.387476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.387523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.387551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.393917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.393950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.393990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.400643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.400697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.400749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.407114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.407163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.407190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.412141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.412174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.412202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.417089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.417122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.417149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.421580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.421613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.421641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.424896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.424929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.424957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.430272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.430305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.430332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.435202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.435235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.435262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.439602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.439635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.439663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.443302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.443335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.443362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.448291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.448324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.448352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.454063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.454097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.454124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.459622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.459655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 
[2024-11-15 12:47:31.459683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.465330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.465365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.465392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.470969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.471003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.471031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.475369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.475402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.475429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.482339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.482373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.210 [2024-11-15 12:47:31.482400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.210 [2024-11-15 12:47:31.488739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.210 [2024-11-15 12:47:31.488788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.211 [2024-11-15 12:47:31.488825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.211 [2024-11-15 12:47:31.492735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.211 [2024-11-15 12:47:31.492776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.211 [2024-11-15 12:47:31.492802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.211 [2024-11-15 12:47:31.499184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.211 [2024-11-15 12:47:31.499216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:51.211 [2024-11-15 12:47:31.499242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.211 [2024-11-15 12:47:31.505288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.211 [2024-11-15 12:47:31.505320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.211 [2024-11-15 12:47:31.505346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.211 [2024-11-15 12:47:31.510553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.211 [2024-11-15 12:47:31.510602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.211 [2024-11-15 12:47:31.510631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.211 [2024-11-15 12:47:31.515698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.211 [2024-11-15 12:47:31.515741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.211 [2024-11-15 12:47:31.515777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.211 [2024-11-15 12:47:31.521394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.211 [2024-11-15 12:47:31.521428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.211 [2024-11-15 12:47:31.521457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.211 [2024-11-15 12:47:31.527124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.211 [2024-11-15 12:47:31.527157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.211 [2024-11-15 12:47:31.527185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.211 [2024-11-15 12:47:31.532931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.211 [2024-11-15 12:47:31.532964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.211 [2024-11-15 12:47:31.532992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.211 [2024-11-15 12:47:31.538665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.211 [2024-11-15 12:47:31.538697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.211 [2024-11-15 12:47:31.538732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.211 [2024-11-15 12:47:31.545103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.211 [2024-11-15 12:47:31.545135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.211 [2024-11-15 12:47:31.545162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.211 [2024-11-15 12:47:31.549775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.211 [2024-11-15 12:47:31.549808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.211 [2024-11-15 12:47:31.549836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.470 [2024-11-15 12:47:31.553875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.470 [2024-11-15 12:47:31.553908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.470 [2024-11-15 12:47:31.553937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.470 [2024-11-15 12:47:31.558388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.470 [2024-11-15 12:47:31.558422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.470 [2024-11-15 12:47:31.558448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.470 [2024-11-15 12:47:31.564740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.470 [2024-11-15 12:47:31.564773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.470 [2024-11-15 12:47:31.564801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.470 [2024-11-15 12:47:31.571680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.470 [2024-11-15 12:47:31.571713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.470 [2024-11-15 12:47:31.571752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.470 [2024-11-15 12:47:31.577266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.470 [2024-11-15 12:47:31.577298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.470 [2024-11-15 12:47:31.577326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.470 [2024-11-15 12:47:31.581451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.470 [2024-11-15 12:47:31.581484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.470 [2024-11-15 12:47:31.581520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.470 [2024-11-15 12:47:31.585959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.470 [2024-11-15 12:47:31.585990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.470 [2024-11-15 12:47:31.586016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.470 [2024-11-15 12:47:31.591846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.470 [2024-11-15 12:47:31.591879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.470 [2024-11-15 12:47:31.591907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.470 [2024-11-15 12:47:31.599808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.470 [2024-11-15 12:47:31.599841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.470 [2024-11-15 12:47:31.599867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.470 [2024-11-15 12:47:31.606817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.470 [2024-11-15 12:47:31.606851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.470 [2024-11-15 12:47:31.606878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.470 [2024-11-15 12:47:31.611797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.470 [2024-11-15 12:47:31.611830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.470 [2024-11-15 12:47:31.611859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.470 [2024-11-15 12:47:31.618397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.470 
[2024-11-15 12:47:31.618428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.470 [2024-11-15 12:47:31.618455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.625281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.625315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.625355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.631237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.631269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.631296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.638289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.638346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.638374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.644619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.644652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.644693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.651726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.651769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.651811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.659438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.659471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.659511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.667517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.667551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.667578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.674301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.674335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.674364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.679986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.680034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.680061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.685353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.685387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.685414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.689757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.689791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.689819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.694380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.694414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.694441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.700400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.700432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.700458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.707478] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.707509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.707534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.714820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.714853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.714880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.722579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.722627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.722653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.728909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.728943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.728970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.734392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.734427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.734468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.739258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.739291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.739316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.744306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.744339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.744388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:25:51.471 [2024-11-15 12:47:31.750453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.750501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.750526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.758103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.758151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.758177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.764530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.764563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.764604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.771075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.771108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.771134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.777396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.777442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.777469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.783656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.783687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.471 [2024-11-15 12:47:31.783737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.471 [2024-11-15 12:47:31.789944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.471 [2024-11-15 12:47:31.789977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.472 [2024-11-15 12:47:31.790021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.472 [2024-11-15 12:47:31.795506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.472 [2024-11-15 12:47:31.795538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.472 [2024-11-15 12:47:31.795565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.472 [2024-11-15 12:47:31.800902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.472 [2024-11-15 12:47:31.800940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.472 [2024-11-15 12:47:31.800968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.472 [2024-11-15 12:47:31.806379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.472 [2024-11-15 12:47:31.806412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.472 [2024-11-15 12:47:31.806439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.472 [2024-11-15 12:47:31.811839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.472 [2024-11-15 12:47:31.811872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.472 [2024-11-15 12:47:31.811901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.817017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.817050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.817076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.822212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.822246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.822272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.827882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.827915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.827942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.832499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.832531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.832558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.840063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.840094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.840134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.847636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.847669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.847696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.855774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.855808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.855836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.863857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.863890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.863931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.872812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.872846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.872873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.880959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.880992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.881030] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.888873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.888906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.888934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.895978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.896011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.896052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.903880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.903913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.903941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.911642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.911673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.911714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.916556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.731 [2024-11-15 12:47:31.916602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.731 [2024-11-15 12:47:31.916636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.731 [2024-11-15 12:47:31.923253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:31.923300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:31.923326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:31.930879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:31.930913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:51.732 [2024-11-15 12:47:31.930941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:31.937599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:31.937648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:31.937676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:31.941401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:31.941432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:31.941459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:31.948761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:31.948793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:31.948838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:31.955938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:31.955970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:31.956012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:31.961637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:31.961669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:31.961696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:31.967614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:31.967645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:31.967670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:31.973460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:31.973496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:31.973535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:31.978751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:31.978784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:31.978811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:31.983954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:31.983985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:31.984025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:31.988989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:31.989034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:31.989060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:31.994361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:31.994406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:31.994431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:32.000199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:32.000244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:32.000269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:32.006226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:32.006257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:32.006296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:32.011548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:32.011580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:32.011606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:32.016483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:32.016514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:32.016540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:32.021462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:32.021495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:32.021522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:32.027366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:32.027399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:32.027426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:32.032066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:32.032112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:32.032139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:32.037960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:32.038008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:32.038035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:51.732 [2024-11-15 12:47:32.044504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7fc2e0) 00:25:51.732 [2024-11-15 12:47:32.044552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.732 [2024-11-15 12:47:32.044578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:51.991 5341.00 IOPS, 667.62 MiB/s 00:25:51.991 Latency(us) 00:25:51.991 [2024-11-15T11:47:32.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:25:51.991 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:51.991 nvme0n1 : 2.05 5228.80 653.60 0.00 0.00 2997.19 879.88 46020.84 00:25:51.991 [2024-11-15T11:47:32.335Z] =================================================================================================================== 00:25:51.991 [2024-11-15T11:47:32.335Z] Total : 5228.80 653.60 0.00 0.00 2997.19 879.88 46020.84 00:25:51.991 { 00:25:51.991 "results": [ 00:25:51.991 { 00:25:51.991 "job": "nvme0n1", 00:25:51.991 "core_mask": "0x2", 00:25:51.991 "workload": "randread", 00:25:51.991 "status": "finished", 00:25:51.991 "queue_depth": 16, 00:25:51.991 "io_size": 131072, 00:25:51.991 "runtime": 2.045975, 00:25:51.991 "iops": 5228.80289348599, 00:25:51.991 "mibps": 653.6003616857488, 00:25:51.991 "io_failed": 0, 00:25:51.991 "io_timeout": 0, 00:25:51.991 "avg_latency_us": 2997.1867328611092, 00:25:51.991 "min_latency_us": 879.8814814814815, 00:25:51.991 "max_latency_us": 46020.83555555555 00:25:51.991 } 00:25:51.991 ], 00:25:51.991 "core_count": 1 00:25:51.991 } 00:25:51.991 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:51.991 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:51.991 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:51.991 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:51.991 | .driver_specific 00:25:51.991 | .nvme_error 00:25:51.991 | .status_code 00:25:51.991 | .command_transient_transport_error' 00:25:52.250 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 345 > 0 )) 00:25:52.250 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1127958 00:25:52.250 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1127958 ']' 00:25:52.250 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1127958 00:25:52.250 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:52.250 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.250 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1127958 00:25:52.250 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:52.250 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:52.250 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1127958' 00:25:52.250 killing process with pid 1127958 00:25:52.250 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1127958 00:25:52.250 Received shutdown signal, test time was about 2.000000 seconds 00:25:52.250 00:25:52.250 Latency(us) 00:25:52.250 [2024-11-15T11:47:32.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.250 [2024-11-15T11:47:32.594Z] 
=================================================================================================================== 00:25:52.250 [2024-11-15T11:47:32.594Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:52.250 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1127958 00:25:52.509 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:52.509 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:52.509 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:52.509 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:52.509 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:52.509 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1128368 00:25:52.509 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:52.509 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1128368 /var/tmp/bperf.sock 00:25:52.509 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1128368 ']' 00:25:52.509 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:52.509 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:52.509 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:52.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:52.509 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:52.509 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.509 [2024-11-15 12:47:32.687294] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:25:52.509 [2024-11-15 12:47:32.687388] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128368 ] 00:25:52.509 [2024-11-15 12:47:32.751652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.509 [2024-11-15 12:47:32.811060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.767 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.767 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:52.767 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:52.767 12:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:53.025 12:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:53.025 12:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.025 12:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:53.025 12:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.025 12:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:53.025 12:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:53.591 nvme0n1 00:25:53.591 12:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:53.591 12:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.591 12:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:53.591 12:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.591 12:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:53.591 12:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:53.591 Running I/O for 2 seconds... 
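The write pass that follows is easier to read with the RPC sequence from the xtrace above pulled together in one place. This is a condensed sketch, not the literal host/digest.sh code: paths are shortened, the bdevperf PID handling is simplified to $!, and bperf_rpc / rpc_cmd / waitforlisten are the harness wrappers around scripts/rpc.py seen expanding in the log.

  # start bdevperf on its own RPC socket: 4 KiB random writes, QD 128, 2 s; -z waits for the perform_tests RPC
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  waitforlisten $! /var/tmp/bperf.sock

  # keep per-controller NVMe error counters and retry failed commands indefinitely
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # clear any stale injection, attach the target with data digest enabled, then arm crc32c corruption
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

  # drive I/O, then count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  errcount=$(bperf_rpc bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))

With --ddgst set on the attached controller, each corrupted crc32c computation surfaces as the "Data digest error" / COMMAND TRANSIENT TRANSPORT ERROR pairs logged around this point, and --nvme-error-stat is what makes bdev_get_iostat expose that count under driver_specific.nvme_error.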
00:25:53.591 [2024-11-15 12:47:33.858427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166ebfd0 00:25:53.591 [2024-11-15 12:47:33.859841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.591 [2024-11-15 12:47:33.859884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:53.591 [2024-11-15 12:47:33.871221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166ea680 00:25:53.591 [2024-11-15 12:47:33.872531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.591 [2024-11-15 12:47:33.872577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:53.591 [2024-11-15 12:47:33.883169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f3a28 00:25:53.591 [2024-11-15 12:47:33.884509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.591 [2024-11-15 12:47:33.884547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:53.591 [2024-11-15 12:47:33.894055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166ed4e8 00:25:53.591 [2024-11-15 12:47:33.895271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.591 [2024-11-15 12:47:33.895317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:53.591 [2024-11-15 12:47:33.905901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f96f8 00:25:53.591 [2024-11-15 12:47:33.906672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.591 [2024-11-15 12:47:33.906739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:53.591 [2024-11-15 12:47:33.919652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f6cc8 00:25:53.591 [2024-11-15 12:47:33.921132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.591 [2024-11-15 12:47:33.921162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:53.591 [2024-11-15 12:47:33.931001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f6cc8 00:25:53.591 [2024-11-15 12:47:33.932368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.591 [2024-11-15 12:47:33.932429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:33.942622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f1430 00:25:53.849 [2024-11-15 12:47:33.943695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:33.943764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:33.954371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f0bc0 00:25:53.849 [2024-11-15 12:47:33.955575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:33.955629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:33.969017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166e5ec8 00:25:53.849 [2024-11-15 12:47:33.970874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:33.970921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:33.981035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166fcdd0 00:25:53.849 [2024-11-15 12:47:33.982706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:33.982770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:33.989405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166eff18 00:25:53.849 [2024-11-15 12:47:33.990361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:33.990412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.003638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166eb328 00:25:53.849 [2024-11-15 12:47:34.004939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.004970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.014473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166fc128 00:25:53.849 [2024-11-15 12:47:34.015609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.015662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.025497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166edd58 00:25:53.849 [2024-11-15 12:47:34.026854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.026887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.037163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8e88 00:25:53.849 [2024-11-15 12:47:34.037873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.037919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.050811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166ee5c8 00:25:53.849 [2024-11-15 12:47:34.052358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.052402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.062166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166fe720 00:25:53.849 [2024-11-15 12:47:34.063147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.063192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.073360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166ebb98 00:25:53.849 [2024-11-15 12:47:34.074230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.074275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.085634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166ef270 00:25:53.849 [2024-11-15 12:47:34.086819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.086865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.097955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166e8d30 00:25:53.849 [2024-11-15 12:47:34.099280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.099324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.109875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166fb048 00:25:53.849 [2024-11-15 12:47:34.111127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.111173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.121015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166e6fa8 00:25:53.849 [2024-11-15 12:47:34.122267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.122298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.135328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:53.849 [2024-11-15 12:47:34.135523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.135578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.149246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:53.849 [2024-11-15 12:47:34.149439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.149496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.163024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:53.849 [2024-11-15 12:47:34.163227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.163271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.176962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:53.849 [2024-11-15 12:47:34.177184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.177241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:53.849 [2024-11-15 12:47:34.190651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:53.849 [2024-11-15 12:47:34.190861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.849 [2024-11-15 12:47:34.190922] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.204599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.204804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.204851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.218346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.218552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.218594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.232124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.232321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.232380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.245904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.246106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.246160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.259754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.259955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.260015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.273612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.273818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.273876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.287623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.287845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.287905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.301611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.301827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.301884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.315631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.315857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.315914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.329556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.329761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.329826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.343315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.343499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.343527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.357159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.357353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.357396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.370782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.371003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.371064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.384618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.384841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 
12:47:34.384884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.398544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.398746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.398804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.107 [2024-11-15 12:47:34.412453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.107 [2024-11-15 12:47:34.412647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.107 [2024-11-15 12:47:34.412702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.108 [2024-11-15 12:47:34.426241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.108 [2024-11-15 12:47:34.426440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.108 [2024-11-15 12:47:34.426499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.108 [2024-11-15 12:47:34.440244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.108 [2024-11-15 12:47:34.440440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.108 [2024-11-15 12:47:34.440496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.365 [2024-11-15 12:47:34.454039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.365 [2024-11-15 12:47:34.454264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.365 [2024-11-15 12:47:34.454307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.365 [2024-11-15 12:47:34.467833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.365 [2024-11-15 12:47:34.468043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.365 [2024-11-15 12:47:34.468072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.365 [2024-11-15 12:47:34.481870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.365 [2024-11-15 12:47:34.482068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:54.365 [2024-11-15 12:47:34.482111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.365 [2024-11-15 12:47:34.495743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.365 [2024-11-15 12:47:34.495951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.365 [2024-11-15 12:47:34.495996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.365 [2024-11-15 12:47:34.509593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.365 [2024-11-15 12:47:34.509813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.365 [2024-11-15 12:47:34.509858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.365 [2024-11-15 12:47:34.523324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.365 [2024-11-15 12:47:34.523527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.365 [2024-11-15 12:47:34.523570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.365 [2024-11-15 12:47:34.537314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.365 [2024-11-15 12:47:34.537518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.365 [2024-11-15 12:47:34.537577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.365 [2024-11-15 12:47:34.551088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.365 [2024-11-15 12:47:34.551313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.365 [2024-11-15 12:47:34.551368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.365 [2024-11-15 12:47:34.565010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.365 [2024-11-15 12:47:34.565231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.365 [2024-11-15 12:47:34.565288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.365 [2024-11-15 12:47:34.578952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.365 [2024-11-15 12:47:34.579155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19482 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:54.365 [2024-11-15 12:47:34.579198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.365 [2024-11-15 12:47:34.592810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.365 [2024-11-15 12:47:34.593008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.365 [2024-11-15 12:47:34.593066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.366 [2024-11-15 12:47:34.606738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.366 [2024-11-15 12:47:34.606938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.366 [2024-11-15 12:47:34.606995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.366 [2024-11-15 12:47:34.620523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.366 [2024-11-15 12:47:34.620714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.366 [2024-11-15 12:47:34.620750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.366 [2024-11-15 12:47:34.634437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.366 [2024-11-15 12:47:34.634644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.366 [2024-11-15 12:47:34.634701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.366 [2024-11-15 12:47:34.648424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.366 [2024-11-15 12:47:34.648625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.366 [2024-11-15 12:47:34.648667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.366 [2024-11-15 12:47:34.662141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.366 [2024-11-15 12:47:34.662338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.366 [2024-11-15 12:47:34.662396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.366 [2024-11-15 12:47:34.675952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.366 [2024-11-15 12:47:34.676154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:21542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.366 [2024-11-15 12:47:34.676181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.366 [2024-11-15 12:47:34.689957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.366 [2024-11-15 12:47:34.690173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.366 [2024-11-15 12:47:34.690236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.366 [2024-11-15 12:47:34.703868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.366 [2024-11-15 12:47:34.704072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.366 [2024-11-15 12:47:34.704129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.624 [2024-11-15 12:47:34.717526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.624 [2024-11-15 12:47:34.717743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.624 [2024-11-15 12:47:34.717801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.624 [2024-11-15 12:47:34.731193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.624 [2024-11-15 12:47:34.731402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.624 [2024-11-15 12:47:34.731430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.624 [2024-11-15 12:47:34.744900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.624 [2024-11-15 12:47:34.745104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.624 [2024-11-15 12:47:34.745161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.624 [2024-11-15 12:47:34.758836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.624 [2024-11-15 12:47:34.759039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.624 [2024-11-15 12:47:34.759068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.624 [2024-11-15 12:47:34.772432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.624 [2024-11-15 12:47:34.772626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:41 nsid:1 lba:22313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.624 [2024-11-15 12:47:34.772668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.624 [2024-11-15 12:47:34.785933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.624 [2024-11-15 12:47:34.786145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.624 [2024-11-15 12:47:34.786188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.624 [2024-11-15 12:47:34.799371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.625 [2024-11-15 12:47:34.799568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.625 [2024-11-15 12:47:34.799613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.625 [2024-11-15 12:47:34.812918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.625 [2024-11-15 12:47:34.813158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.625 [2024-11-15 12:47:34.813201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.625 [2024-11-15 12:47:34.826599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.625 [2024-11-15 12:47:34.826823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.625 [2024-11-15 12:47:34.826870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.625 [2024-11-15 12:47:34.840228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.625 [2024-11-15 12:47:34.840427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.625 [2024-11-15 12:47:34.840485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.625 19185.00 IOPS, 74.94 MiB/s [2024-11-15T11:47:34.969Z] [2024-11-15 12:47:34.853677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.625 [2024-11-15 12:47:34.853902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.625 [2024-11-15 12:47:34.853947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.625 [2024-11-15 12:47:34.867200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.625 
[2024-11-15 12:47:34.867402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.625 [2024-11-15 12:47:34.867455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.625 [2024-11-15 12:47:34.880818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.625 [2024-11-15 12:47:34.881023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.625 [2024-11-15 12:47:34.881083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.625 [2024-11-15 12:47:34.894323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.625 [2024-11-15 12:47:34.894542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.625 [2024-11-15 12:47:34.894572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.625 [2024-11-15 12:47:34.907918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.625 [2024-11-15 12:47:34.908139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.625 [2024-11-15 12:47:34.908183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.625 [2024-11-15 12:47:34.921452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.625 [2024-11-15 12:47:34.921647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.625 [2024-11-15 12:47:34.921676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.625 [2024-11-15 12:47:34.934976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.625 [2024-11-15 12:47:34.935177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.625 [2024-11-15 12:47:34.935219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.625 [2024-11-15 12:47:34.948383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.625 [2024-11-15 12:47:34.948590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.625 [2024-11-15 12:47:34.948618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.625 [2024-11-15 12:47:34.961896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with 
pdu=0x2000166f8a50 00:25:54.625 [2024-11-15 12:47:34.962105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.625 [2024-11-15 12:47:34.962147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.883 [2024-11-15 12:47:34.975412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.883 [2024-11-15 12:47:34.975611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-11-15 12:47:34.975640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.883 [2024-11-15 12:47:34.988918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.883 [2024-11-15 12:47:34.989128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-11-15 12:47:34.989169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.883 [2024-11-15 12:47:35.002417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.883 [2024-11-15 12:47:35.002615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-11-15 12:47:35.002659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.883 [2024-11-15 12:47:35.015927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.883 [2024-11-15 12:47:35.016154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-11-15 12:47:35.016183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.883 [2024-11-15 12:47:35.029371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.883 [2024-11-15 12:47:35.029567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-11-15 12:47:35.029611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.883 [2024-11-15 12:47:35.042938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.883 [2024-11-15 12:47:35.043151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-11-15 12:47:35.043203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.883 [2024-11-15 12:47:35.056378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.883 [2024-11-15 12:47:35.056572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-11-15 12:47:35.056615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.883 [2024-11-15 12:47:35.069805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.883 [2024-11-15 12:47:35.070018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.883 [2024-11-15 12:47:35.070048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.883 [2024-11-15 12:47:35.083516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.883 [2024-11-15 12:47:35.083715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-11-15 12:47:35.083785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.884 [2024-11-15 12:47:35.096944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.884 [2024-11-15 12:47:35.097167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-11-15 12:47:35.097195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.884 [2024-11-15 12:47:35.110341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.884 [2024-11-15 12:47:35.110540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-11-15 12:47:35.110584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.884 [2024-11-15 12:47:35.123823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.884 [2024-11-15 12:47:35.124032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-11-15 12:47:35.124075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.884 [2024-11-15 12:47:35.137215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.884 [2024-11-15 12:47:35.137413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-11-15 12:47:35.137456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.884 [2024-11-15 12:47:35.150649] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.884 [2024-11-15 12:47:35.150883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-11-15 12:47:35.150929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.884 [2024-11-15 12:47:35.164257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.884 [2024-11-15 12:47:35.164468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-11-15 12:47:35.164510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.884 [2024-11-15 12:47:35.177814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.884 [2024-11-15 12:47:35.178016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-11-15 12:47:35.178075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.884 [2024-11-15 12:47:35.191297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.884 [2024-11-15 12:47:35.191496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-11-15 12:47:35.191537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.884 [2024-11-15 12:47:35.204779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.884 [2024-11-15 12:47:35.204959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-11-15 12:47:35.205024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.884 [2024-11-15 12:47:35.218271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:54.884 [2024-11-15 12:47:35.218474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.884 [2024-11-15 12:47:35.218502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.231748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.231959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.232019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.245284] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.245480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.245523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.258666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.258893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.258936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.272206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.272401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.272429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.285688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.285914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.285959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.299091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.299284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.299328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.312553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.312751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.312816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.326076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.326272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.326316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 
[2024-11-15 12:47:35.339544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.339748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.339794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.352788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.352987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.353047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.366237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.366462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.366492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.379678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.379912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.379960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.393102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.393299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.393349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.406573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.406799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.406847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.420244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.420439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.420482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 
dnr:0 00:25:55.143 [2024-11-15 12:47:35.433822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.434025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.434084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.447337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.447536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.447564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.460709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.460927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.460974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.143 [2024-11-15 12:47:35.474219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.143 [2024-11-15 12:47:35.474412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.143 [2024-11-15 12:47:35.474455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.402 [2024-11-15 12:47:35.488077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.402 [2024-11-15 12:47:35.488273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.402 [2024-11-15 12:47:35.488317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.402 [2024-11-15 12:47:35.501484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.402 [2024-11-15 12:47:35.501682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.402 [2024-11-15 12:47:35.501758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.402 [2024-11-15 12:47:35.515048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.402 [2024-11-15 12:47:35.515250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.402 [2024-11-15 12:47:35.515293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 
cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.402 [2024-11-15 12:47:35.528569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.402 [2024-11-15 12:47:35.528802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.402 [2024-11-15 12:47:35.528832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.402 [2024-11-15 12:47:35.542171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.402 [2024-11-15 12:47:35.542369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.402 [2024-11-15 12:47:35.542412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.402 [2024-11-15 12:47:35.555584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.402 [2024-11-15 12:47:35.555803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.402 [2024-11-15 12:47:35.555832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.402 [2024-11-15 12:47:35.569209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.402 [2024-11-15 12:47:35.569405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.402 [2024-11-15 12:47:35.569449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.402 [2024-11-15 12:47:35.582728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.402 [2024-11-15 12:47:35.582925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.402 [2024-11-15 12:47:35.582970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.402 [2024-11-15 12:47:35.596181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.402 [2024-11-15 12:47:35.596378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.402 [2024-11-15 12:47:35.596421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.402 [2024-11-15 12:47:35.609661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.402 [2024-11-15 12:47:35.609900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.402 [2024-11-15 12:47:35.609949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.403 [2024-11-15 12:47:35.623045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.403 [2024-11-15 12:47:35.623238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.403 [2024-11-15 12:47:35.623280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.403 [2024-11-15 12:47:35.636399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.403 [2024-11-15 12:47:35.636617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.403 [2024-11-15 12:47:35.636661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.403 [2024-11-15 12:47:35.649818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.403 [2024-11-15 12:47:35.650028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.403 [2024-11-15 12:47:35.650072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.403 [2024-11-15 12:47:35.663222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.403 [2024-11-15 12:47:35.663440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.403 [2024-11-15 12:47:35.663501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.403 [2024-11-15 12:47:35.676655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.403 [2024-11-15 12:47:35.676880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.403 [2024-11-15 12:47:35.676911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.403 [2024-11-15 12:47:35.690108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.403 [2024-11-15 12:47:35.690305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.403 [2024-11-15 12:47:35.690348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.403 [2024-11-15 12:47:35.703603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.403 [2024-11-15 12:47:35.703838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.403 [2024-11-15 12:47:35.703869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.403 [2024-11-15 12:47:35.717149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.403 [2024-11-15 12:47:35.717361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.403 [2024-11-15 12:47:35.717405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.403 [2024-11-15 12:47:35.730600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.403 [2024-11-15 12:47:35.730820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.403 [2024-11-15 12:47:35.730864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.403 [2024-11-15 12:47:35.744154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.403 [2024-11-15 12:47:35.744354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.403 [2024-11-15 12:47:35.744404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.661 [2024-11-15 12:47:35.757631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.661 [2024-11-15 12:47:35.757827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.661 [2024-11-15 12:47:35.757872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.661 [2024-11-15 12:47:35.771236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.661 [2024-11-15 12:47:35.771444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.661 [2024-11-15 12:47:35.771495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.661 [2024-11-15 12:47:35.784613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.661 [2024-11-15 12:47:35.784836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.661 [2024-11-15 12:47:35.784879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.661 [2024-11-15 12:47:35.798118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.661 [2024-11-15 12:47:35.798316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.661 [2024-11-15 12:47:35.798344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.661 [2024-11-15 12:47:35.811575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.661 [2024-11-15 12:47:35.811796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.661 [2024-11-15 12:47:35.811839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.661 [2024-11-15 12:47:35.824973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.661 [2024-11-15 12:47:35.825175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.661 [2024-11-15 12:47:35.825203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.661 [2024-11-15 12:47:35.838392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.661 [2024-11-15 12:47:35.838587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.661 [2024-11-15 12:47:35.838629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.661 19067.50 IOPS, 74.48 MiB/s [2024-11-15T11:47:36.005Z] [2024-11-15 12:47:35.851975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25210) with pdu=0x2000166f8a50 00:25:55.661 [2024-11-15 12:47:35.852169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.661 [2024-11-15 12:47:35.852210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.661 00:25:55.661 Latency(us) 00:25:55.661 [2024-11-15T11:47:36.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.661 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:55.661 nvme0n1 : 2.01 19070.44 74.49 0.00 0.00 6696.42 2973.39 15728.64 00:25:55.661 [2024-11-15T11:47:36.005Z] =================================================================================================================== 00:25:55.661 [2024-11-15T11:47:36.005Z] Total : 19070.44 74.49 0.00 0.00 6696.42 2973.39 15728.64 00:25:55.661 { 00:25:55.661 "results": [ 00:25:55.661 { 00:25:55.661 "job": "nvme0n1", 00:25:55.661 "core_mask": "0x2", 00:25:55.661 "workload": "randwrite", 00:25:55.661 "status": "finished", 00:25:55.661 "queue_depth": 128, 00:25:55.661 "io_size": 4096, 00:25:55.661 "runtime": 2.008501, 00:25:55.661 "iops": 19070.441090146334, 00:25:55.661 "mibps": 74.49391050838412, 00:25:55.662 "io_failed": 0, 00:25:55.662 "io_timeout": 0, 00:25:55.662 "avg_latency_us": 6696.421612851136, 00:25:55.662 "min_latency_us": 2973.392592592593, 00:25:55.662 "max_latency_us": 15728.64 00:25:55.662 } 00:25:55.662 ], 00:25:55.662 "core_count": 1 00:25:55.662 } 00:25:55.662 12:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:25:55.662 12:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:55.662 12:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:55.662 12:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:55.662 | .driver_specific 00:25:55.662 | .nvme_error 00:25:55.662 | .status_code 00:25:55.662 | .command_transient_transport_error' 00:25:55.919 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 150 > 0 )) 00:25:55.919 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1128368 00:25:55.919 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1128368 ']' 00:25:55.919 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1128368 00:25:55.919 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:55.919 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:55.919 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1128368 00:25:55.919 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:55.919 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:55.919 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1128368' 00:25:55.919 killing process with pid 1128368 00:25:55.919 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1128368 00:25:55.919 Received shutdown signal, test time was about 2.000000 seconds 00:25:55.919 00:25:55.919 Latency(us) 00:25:55.919 [2024-11-15T11:47:36.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.919 [2024-11-15T11:47:36.263Z] =================================================================================================================== 00:25:55.919 [2024-11-15T11:47:36.263Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:55.919 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1128368 00:25:56.177 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:56.177 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:56.177 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:56.177 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:56.177 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:56.177 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1128888 00:25:56.177 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w 
randwrite -o 131072 -t 2 -q 16 -z 00:25:56.177 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1128888 /var/tmp/bperf.sock 00:25:56.177 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1128888 ']' 00:25:56.177 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:56.177 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.177 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:56.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:56.177 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.177 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:56.177 [2024-11-15 12:47:36.452944] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:25:56.177 [2024-11-15 12:47:36.453043] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128888 ] 00:25:56.177 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:56.177 Zero copy mechanism will not be used. 00:25:56.177 [2024-11-15 12:47:36.517858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.435 [2024-11-15 12:47:36.574567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.435 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:56.435 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:56.435 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:56.435 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:56.693 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:56.693 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.693 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:56.693 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.693 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.693 12:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
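The xtrace lines above interleave the bdevperf startup with the digest-error setup; a minimal sketch of the sequence they execute, assuming only the RPC names, socket paths and addresses already shown in this log (rpc_cmd is taken to address the default SPDK socket of the nvmf target, bperf_rpc the bdevperf instance on /var/tmp/bperf.sock), is:

    # Hypothetical condensed replay of the host/digest.sh steps visible in the xtrace.
    spdk_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_rpc="$spdk_rpc -s /var/tmp/bperf.sock"

    # 1. Collect per-controller NVMe error statistics and retry failed I/O indefinitely.
    $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # 2. Arm crc32c corruption: clear any previous injection, then corrupt every
    #    32nd crc32c operation, so data digest verification fails and the WRITEs
    #    complete with COMMAND TRANSIENT TRANSPORT ERROR, as logged above.
    $spdk_rpc accel_error_inject_error -o crc32c -t disable
    $spdk_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # 3. Attach the controller with data digest enabled (--ddgst).
    $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 4. Run the workload, then read the transient transport error count that
    #    host/digest.sh checks is greater than zero.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
    $bperf_rpc bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'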
00:25:57.258 nvme0n1 00:25:57.258 12:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:57.258 12:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.258 12:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:57.258 12:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.258 12:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:57.258 12:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:57.258 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:57.258 Zero copy mechanism will not be used. 00:25:57.258 Running I/O for 2 seconds... 00:25:57.258 [2024-11-15 12:47:37.540895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:57.258 [2024-11-15 12:47:37.541009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.258 [2024-11-15 12:47:37.541069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:57.258 [2024-11-15 12:47:37.547450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:57.258 [2024-11-15 12:47:37.547538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.258 [2024-11-15 12:47:37.547577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:57.258 [2024-11-15 12:47:37.553344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:57.258 [2024-11-15 12:47:37.553429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.258 [2024-11-15 12:47:37.553465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:57.258 [2024-11-15 12:47:37.558629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:57.258 [2024-11-15 12:47:37.558753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.258 [2024-11-15 12:47:37.558791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:57.258 [2024-11-15 12:47:37.564545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:57.258 [2024-11-15 12:47:37.564639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.258 [2024-11-15 12:47:37.564677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:57.258 [2024-11-15 12:47:37.569934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:57.258 [2024-11-15 12:47:37.570034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.258 [2024-11-15 12:47:37.570071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:58.041 [2024-11-15 12:47:38.202246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.041 [2024-11-15 12:47:38.202428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:25:58.041 [2024-11-15 12:47:38.202458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.041 [2024-11-15 12:47:38.207238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.041 [2024-11-15 12:47:38.207458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.041 [2024-11-15 12:47:38.207488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.041 [2024-11-15 12:47:38.212422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.041 [2024-11-15 12:47:38.212662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.041 [2024-11-15 12:47:38.212692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.041 [2024-11-15 12:47:38.217881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.041 [2024-11-15 12:47:38.218140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.041 [2024-11-15 12:47:38.218177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.041 [2024-11-15 12:47:38.223133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.041 [2024-11-15 12:47:38.223347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.041 [2024-11-15 12:47:38.223377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.041 [2024-11-15 12:47:38.227296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.041 [2024-11-15 12:47:38.227552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.041 [2024-11-15 12:47:38.227582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.041 [2024-11-15 12:47:38.231662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.041 [2024-11-15 12:47:38.231823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.041 [2024-11-15 12:47:38.231855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.041 [2024-11-15 12:47:38.236128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.041 [2024-11-15 12:47:38.236309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.041 [2024-11-15 12:47:38.236340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.041 [2024-11-15 12:47:38.240474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.041 [2024-11-15 12:47:38.240650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.041 [2024-11-15 12:47:38.240681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.041 [2024-11-15 12:47:38.244787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.041 [2024-11-15 12:47:38.244953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.041 [2024-11-15 12:47:38.244983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.041 [2024-11-15 12:47:38.249232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.041 [2024-11-15 12:47:38.249388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.041 [2024-11-15 12:47:38.249419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.041 [2024-11-15 12:47:38.253730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.041 [2024-11-15 12:47:38.253885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.253916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.258128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.258293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.258324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.262578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.262732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.262767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.266958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.267149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.267179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.271460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.271678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.271726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.275917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.276120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.276150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.280364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.280543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.280573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.284838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.285017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.285047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.289242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.289408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.289438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.293565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.293767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.293798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.297895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.298091] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.298121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.302956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.303236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.303267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.308056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.308295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.308325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.313622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.313912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.313943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.319180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.319363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.319393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.324988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.325249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.325279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.330873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.331151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.331181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.336983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.337280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.337311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.342970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.343252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.343288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.349016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.349182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.349212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.354976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.355156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.355186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.361457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.361671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.361703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.367620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.367905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.367936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.373645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 12:47:38.373894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.373926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.042 [2024-11-15 12:47:38.379667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.042 [2024-11-15 
12:47:38.379901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.042 [2024-11-15 12:47:38.379932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.302 [2024-11-15 12:47:38.385801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.302 [2024-11-15 12:47:38.386103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.302 [2024-11-15 12:47:38.386133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.302 [2024-11-15 12:47:38.391615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.302 [2024-11-15 12:47:38.391884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.391915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.397693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.397981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.398012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.403450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.403670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.403702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.408498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.408743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.408774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.412873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.413091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.413123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.417188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 
00:25:58.303 [2024-11-15 12:47:38.417418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.417449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.421633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.421885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.421916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.425936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.426149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.426179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.430325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.430535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.430565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.434704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.434932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.434963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.439166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.439368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.439400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.443800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.444033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.444065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.448216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.448440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.448471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.452504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.452735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.452767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.457206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.457456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.457487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.462376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.462671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.462702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.466655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.466868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.466899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.470892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.471097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.471128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.475059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.475263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.475300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.479264] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.479471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.479502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.483666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.483982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.484013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.488800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.489060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.489091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.494180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.494404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.494435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.499920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.500115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.500147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.303 [2024-11-15 12:47:38.504111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.303 [2024-11-15 12:47:38.504317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.303 [2024-11-15 12:47:38.504349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.508589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.508797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.508828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.512943] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.513161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.513192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.517482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.517704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.517747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.521946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.522169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.522200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.526407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.526647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.526678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.530910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.531114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.531146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.535480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.535696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.535735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.304 6541.00 IOPS, 817.62 MiB/s [2024-11-15T11:47:38.648Z] [2024-11-15 12:47:38.541327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.541546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.541582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.545871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.546066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.546098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.550194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.550399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.550430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.554474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.554667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.554704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.558840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.559018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.559049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.563413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.563596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.563628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.567956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.568152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.568184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.572410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.572613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.572644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.577009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.577210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.577241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.581544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.581755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.581786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.585916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.586058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.586089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.590441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.590552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.590588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.594952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.595120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.595151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.599454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.599582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.599613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.604035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.604160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.304 [2024-11-15 12:47:38.604191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.304 [2024-11-15 12:47:38.608570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.304 [2024-11-15 12:47:38.608703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.305 [2024-11-15 12:47:38.608743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.305 [2024-11-15 12:47:38.613011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.305 [2024-11-15 12:47:38.613111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.305 [2024-11-15 12:47:38.613146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.305 [2024-11-15 12:47:38.617438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.305 [2024-11-15 12:47:38.617543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.305 [2024-11-15 12:47:38.617577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.305 [2024-11-15 12:47:38.622043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.305 [2024-11-15 12:47:38.622154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.305 [2024-11-15 12:47:38.622187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.305 [2024-11-15 12:47:38.626654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.305 [2024-11-15 12:47:38.626760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.305 [2024-11-15 12:47:38.626801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.305 [2024-11-15 12:47:38.632133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.305 [2024-11-15 12:47:38.632259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.305 [2024-11-15 12:47:38.632291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.305 [2024-11-15 12:47:38.636336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.305 [2024-11-15 12:47:38.636459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.305 [2024-11-15 
12:47:38.636490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.305 [2024-11-15 12:47:38.640500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.305 [2024-11-15 12:47:38.640634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.305 [2024-11-15 12:47:38.640665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.564 [2024-11-15 12:47:38.644800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.564 [2024-11-15 12:47:38.645006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.564 [2024-11-15 12:47:38.645036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.564 [2024-11-15 12:47:38.649609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.564 [2024-11-15 12:47:38.649838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.564 [2024-11-15 12:47:38.649869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.564 [2024-11-15 12:47:38.654685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.564 [2024-11-15 12:47:38.654919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.564 [2024-11-15 12:47:38.654949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.564 [2024-11-15 12:47:38.660184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.660385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.660415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.665534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.665707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.665745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.669875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.670000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:58.565 [2024-11-15 12:47:38.670030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.674314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.674448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.674484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.678615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.678758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.678789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.682846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.683005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.683036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.687546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.687751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.687782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.692525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.692706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.692748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.697772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.697922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.697952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.703205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.703401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.703432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.709022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.709124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.709160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.714426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.714545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.714576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.720864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.721081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.721111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.726449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.726547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.726583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.731535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.731636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.731674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.735796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.735918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.735949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.739977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.740079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.740115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.744278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.744445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.744476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.749340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.749522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.749553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.754425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.754614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.754644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.760152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.760312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.760342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.764918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.765026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.765061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.769287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.769398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.769433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.773566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.773696] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.773734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.777769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.777911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.777942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.782367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.782578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.782608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.787481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.787664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.565 [2024-11-15 12:47:38.787694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.565 [2024-11-15 12:47:38.792062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.565 [2024-11-15 12:47:38.792229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.792259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.798017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.798176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.798207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.802768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.802899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.802944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.807020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.807162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.807192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.811316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.811449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.811479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.815537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.815652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.815684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.819774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.819916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.819946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.823942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.824095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.824125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.828150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.828304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.828334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.832908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.833013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.833047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.837986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 
12:47:38.838084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.838120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.842370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.842496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.842528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.847393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.847632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.847662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.852543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.852726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.852757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.858881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.859069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.859100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.863322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.863430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.863464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.867562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.867728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.867759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.871914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with 
pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.872076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.872106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.876290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.876445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.876475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.880598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.880715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.880757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.885058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.885168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.885202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.889428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.889597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.889627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.893799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.893946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.893977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.898035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.898188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.898218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.566 [2024-11-15 12:47:38.902276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.566 [2024-11-15 12:47:38.902451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.566 [2024-11-15 12:47:38.902482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.906525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.906640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.906672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.910820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.910962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.910993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.915507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.915749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.915780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.920616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.920822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.920860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.926105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.926280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.926311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.931479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.931623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.931653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.936264] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.936413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.936443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.941420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.941612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.941643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.946414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.946570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.946601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.951521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.951649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.951681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.956547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.956733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.956764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.961622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.961809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.961839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.966614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.966770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.966802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.827 
[2024-11-15 12:47:38.971697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.971845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.971876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.976992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.977170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.977200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.982110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.982260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.982291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.987159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.987332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.987362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.992839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.993040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.993071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:38.998383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:38.998525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:38.998556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:39.003436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:39.003596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:39.003626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:39.008628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:39.008812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:39.008842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:39.013864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:39.014020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:39.014051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:39.018945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:39.019105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-11-15 12:47:39.019136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.827 [2024-11-15 12:47:39.024021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.827 [2024-11-15 12:47:39.024181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.024212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.029075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.029255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.029286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.034159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.034337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.034368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.039243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.039409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.039439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.044314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.044495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.044525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.049295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.049467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.049497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.054388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.054567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.054604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.059584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.059770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.059801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.064542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.064705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.064745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.069628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.069796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.069827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.074715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.074880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.074911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.079847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.080072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.080102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.084985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.085159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.085190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.090082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.090252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.090282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.095161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.095320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.095351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.100254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.100407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.100438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.105298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.105482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.105513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.110330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.110530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.110561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.115449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.115622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.115652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.120482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.120691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.120730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.125620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.125782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.125813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.130849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.130989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.131020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.135933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.136081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.136112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.141022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.141175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.141206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.146089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.146259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 
12:47:39.146290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.151162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.151331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.151361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.156250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.156407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.828 [2024-11-15 12:47:39.156438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.828 [2024-11-15 12:47:39.161253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.828 [2024-11-15 12:47:39.161489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.829 [2024-11-15 12:47:39.161519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.829 [2024-11-15 12:47:39.166416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:58.829 [2024-11-15 12:47:39.166589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.829 [2024-11-15 12:47:39.166619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.171525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.171785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.088 [2024-11-15 12:47:39.171816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.176616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.176879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.088 [2024-11-15 12:47:39.176911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.181766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.181945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:59.088 [2024-11-15 12:47:39.181976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.186846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.187031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.088 [2024-11-15 12:47:39.187069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.191852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.192099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.088 [2024-11-15 12:47:39.192130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.196855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.197053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.088 [2024-11-15 12:47:39.197084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.201976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.202151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.088 [2024-11-15 12:47:39.202182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.207186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.207365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.088 [2024-11-15 12:47:39.207396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.212293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.212565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.088 [2024-11-15 12:47:39.212596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.217294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.217462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.088 [2024-11-15 12:47:39.217493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.222270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.222420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.088 [2024-11-15 12:47:39.222450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.227407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.227611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.088 [2024-11-15 12:47:39.227641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.232531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.232706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.088 [2024-11-15 12:47:39.232744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.237499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.237665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.088 [2024-11-15 12:47:39.237695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.242585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.242751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.088 [2024-11-15 12:47:39.242781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.088 [2024-11-15 12:47:39.247728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.088 [2024-11-15 12:47:39.247938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.247968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.252881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.253032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.253062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.257922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.258087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.258117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.262912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.263055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.263085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.268119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.268250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.268280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.273133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.273345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.273375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.278243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.278427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.278457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.283311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.283509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.283539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.288306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.288476] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.288506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.293287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.293435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.293465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.298355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.298512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.298541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.303397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.303577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.303607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.308484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.308677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.308707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.313590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.313740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.313770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.318673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.318833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.318871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.323734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.323897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.323927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.328827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.328983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.329013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.333907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.334052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.334082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.338875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.339028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.339058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.343947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.344088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.344118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.349014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.349171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.349202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.354098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.354266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.354295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.359176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 
12:47:39.359332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.359362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.364346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.364549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.364579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.369398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.369607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.369637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.374497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.374694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.089 [2024-11-15 12:47:39.374731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.089 [2024-11-15 12:47:39.379634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.089 [2024-11-15 12:47:39.379796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.090 [2024-11-15 12:47:39.379827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.090 [2024-11-15 12:47:39.384693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.090 [2024-11-15 12:47:39.384879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.090 [2024-11-15 12:47:39.384909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.090 [2024-11-15 12:47:39.389768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.090 [2024-11-15 12:47:39.389939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.090 [2024-11-15 12:47:39.389969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.090 [2024-11-15 12:47:39.394758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 
00:25:59.090 [2024-11-15 12:47:39.394877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.090 [2024-11-15 12:47:39.394911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.090 [2024-11-15 12:47:39.399823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.090 [2024-11-15 12:47:39.399978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.090 [2024-11-15 12:47:39.400008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.090 [2024-11-15 12:47:39.405038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.090 [2024-11-15 12:47:39.405200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.090 [2024-11-15 12:47:39.405230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.090 [2024-11-15 12:47:39.410170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.090 [2024-11-15 12:47:39.410399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.090 [2024-11-15 12:47:39.410429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.090 [2024-11-15 12:47:39.415334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.090 [2024-11-15 12:47:39.415454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.090 [2024-11-15 12:47:39.415485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.090 [2024-11-15 12:47:39.420537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.090 [2024-11-15 12:47:39.420674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.090 [2024-11-15 12:47:39.420704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.090 [2024-11-15 12:47:39.425748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.090 [2024-11-15 12:47:39.425904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.090 [2024-11-15 12:47:39.425934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.349 [2024-11-15 12:47:39.430825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.349 [2024-11-15 12:47:39.430966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.349 [2024-11-15 12:47:39.430996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.349 [2024-11-15 12:47:39.436026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.349 [2024-11-15 12:47:39.436182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.349 [2024-11-15 12:47:39.436211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.349 [2024-11-15 12:47:39.441029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.349 [2024-11-15 12:47:39.441147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.349 [2024-11-15 12:47:39.441178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.349 [2024-11-15 12:47:39.446172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.349 [2024-11-15 12:47:39.446329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.349 [2024-11-15 12:47:39.446359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.349 [2024-11-15 12:47:39.451277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.349 [2024-11-15 12:47:39.451412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.349 [2024-11-15 12:47:39.451449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.349 [2024-11-15 12:47:39.456370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.349 [2024-11-15 12:47:39.456505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.349 [2024-11-15 12:47:39.456535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.349 [2024-11-15 12:47:39.461561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.349 [2024-11-15 12:47:39.461704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.349 [2024-11-15 12:47:39.461743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.349 [2024-11-15 12:47:39.466636] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.349 [2024-11-15 12:47:39.466806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.349 [2024-11-15 12:47:39.466837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.349 [2024-11-15 12:47:39.471852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.349 [2024-11-15 12:47:39.471996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.349 [2024-11-15 12:47:39.472026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.349 [2024-11-15 12:47:39.477049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.349 [2024-11-15 12:47:39.477203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.349 [2024-11-15 12:47:39.477233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.349 [2024-11-15 12:47:39.481589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.349 [2024-11-15 12:47:39.481748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.349 [2024-11-15 12:47:39.481778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.349 [2024-11-15 12:47:39.485858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.349 [2024-11-15 12:47:39.485990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.349 [2024-11-15 12:47:39.486021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.349 [2024-11-15 12:47:39.490135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.349 [2024-11-15 12:47:39.490302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.349 [2024-11-15 12:47:39.490331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.349 [2024-11-15 12:47:39.494450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.349 [2024-11-15 12:47:39.494597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.350 [2024-11-15 12:47:39.494634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.350 [2024-11-15 12:47:39.498682] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.350 [2024-11-15 12:47:39.498833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.350 [2024-11-15 12:47:39.498863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.350 [2024-11-15 12:47:39.502963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.350 [2024-11-15 12:47:39.503092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.350 [2024-11-15 12:47:39.503122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.350 [2024-11-15 12:47:39.507318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.350 [2024-11-15 12:47:39.507449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.350 [2024-11-15 12:47:39.507479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.350 [2024-11-15 12:47:39.511605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.350 [2024-11-15 12:47:39.511776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.350 [2024-11-15 12:47:39.511807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.350 [2024-11-15 12:47:39.515861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.350 [2024-11-15 12:47:39.515987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.350 [2024-11-15 12:47:39.516019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.350 [2024-11-15 12:47:39.520151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.350 [2024-11-15 12:47:39.520310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.350 [2024-11-15 12:47:39.520340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.350 [2024-11-15 12:47:39.524751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.350 [2024-11-15 12:47:39.524916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.350 [2024-11-15 12:47:39.524947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.350 
[2024-11-15 12:47:39.530000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.350 [2024-11-15 12:47:39.530122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.350 [2024-11-15 12:47:39.530152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.350 [2024-11-15 12:47:39.535813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b25550) with pdu=0x2000166ff3c8 00:25:59.350 [2024-11-15 12:47:39.535931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.350 [2024-11-15 12:47:39.535964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.350 6438.00 IOPS, 804.75 MiB/s 00:25:59.350 Latency(us) 00:25:59.350 [2024-11-15T11:47:39.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.350 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:59.350 nvme0n1 : 2.00 6434.96 804.37 0.00 0.00 2479.73 1856.85 11262.48 00:25:59.350 [2024-11-15T11:47:39.694Z] =================================================================================================================== 00:25:59.350 [2024-11-15T11:47:39.694Z] Total : 6434.96 804.37 0.00 0.00 2479.73 1856.85 11262.48 00:25:59.350 { 00:25:59.350 "results": [ 00:25:59.350 { 00:25:59.350 "job": "nvme0n1", 00:25:59.350 "core_mask": "0x2", 00:25:59.350 "workload": "randwrite", 00:25:59.350 "status": "finished", 00:25:59.350 "queue_depth": 16, 00:25:59.350 "io_size": 131072, 00:25:59.350 "runtime": 2.003432, 00:25:59.350 "iops": 6434.957612736544, 00:25:59.350 "mibps": 804.369701592068, 00:25:59.350 "io_failed": 0, 00:25:59.350 "io_timeout": 0, 00:25:59.350 "avg_latency_us": 2479.7318900035625, 00:25:59.350 "min_latency_us": 1856.8533333333332, 00:25:59.350 "max_latency_us": 11262.482962962962 00:25:59.350 } 00:25:59.350 ], 00:25:59.350 "core_count": 1 00:25:59.350 } 00:25:59.350 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:59.350 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:59.350 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:59.350 | .driver_specific 00:25:59.350 | .nvme_error 00:25:59.350 | .status_code 00:25:59.350 | .command_transient_transport_error' 00:25:59.350 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:59.608 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 416 > 0 )) 00:25:59.608 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1128888 00:25:59.608 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1128888 ']' 00:25:59.608 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1128888 00:25:59.608 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # uname 00:25:59.608 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:59.608 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1128888 00:25:59.608 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:59.608 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:59.608 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1128888' 00:25:59.608 killing process with pid 1128888 00:25:59.608 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1128888 00:25:59.608 Received shutdown signal, test time was about 2.000000 seconds 00:25:59.608 00:25:59.608 Latency(us) 00:25:59.608 [2024-11-15T11:47:39.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.608 [2024-11-15T11:47:39.952Z] =================================================================================================================== 00:25:59.608 [2024-11-15T11:47:39.952Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:59.608 12:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1128888 00:25:59.865 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1127489 00:25:59.865 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1127489 ']' 00:25:59.865 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1127489 00:25:59.865 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:59.865 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:59.865 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1127489 00:25:59.865 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:59.865 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:59.865 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1127489' 00:25:59.865 killing process with pid 1127489 00:25:59.865 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1127489 00:25:59.865 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1127489 00:26:00.123 00:26:00.123 real 0m15.478s 00:26:00.123 user 0m29.995s 00:26:00.123 sys 0m4.734s 00:26:00.123 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:00.123 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:00.123 ************************************ 00:26:00.123 END TEST nvmf_digest_error 00:26:00.123 ************************************ 00:26:00.123 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:00.123 12:47:40 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:00.123 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:00.123 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:00.123 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:00.123 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:00.123 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:00.123 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:00.123 rmmod nvme_tcp 00:26:00.124 rmmod nvme_fabrics 00:26:00.124 rmmod nvme_keyring 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1127489 ']' 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1127489 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1127489 ']' 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1127489 00:26:00.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1127489) - No such process 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1127489 is not found' 00:26:00.383 Process with pid 1127489 is not found 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.383 12:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.295 12:47:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:02.295 00:26:02.295 real 0m35.634s 00:26:02.295 user 1m2.486s 00:26:02.295 sys 0m10.485s 00:26:02.295 12:47:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:02.295 12:47:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:02.295 ************************************ 00:26:02.295 END TEST nvmf_digest 00:26:02.295 ************************************ 00:26:02.295 
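Note on the nvmf_digest_error run that ends above: the pass/fail decision is taken from the controller's error counters, not from the log text. Every WRITE that hits a data digest (CRC32C) mismatch completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), and the test then reads that counter back over the bperf RPC socket before tearing everything down. A minimal sketch of the same check, assuming the socket path and bdev name used in this run (/var/tmp/bperf.sock, nvme0n1) and rpc.py invoked from the SPDK tree:

  # Ask bdevperf for per-bdev iostat and pull out the transient transport error count
  errs=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test passes when at least one such error was recorded (416 in the run above)
  (( errs > 0 ))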
12:47:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:02.295 12:47:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:02.295 12:47:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:02.295 12:47:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:02.295 12:47:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:02.295 12:47:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:02.295 12:47:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.295 ************************************ 00:26:02.295 START TEST nvmf_bdevperf 00:26:02.295 ************************************ 00:26:02.295 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:02.295 * Looking for test storage... 00:26:02.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:02.553 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:02.553 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:02.553 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:02.553 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:02.553 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.553 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.553 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.553 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.553 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.553 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.553 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.553 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:02.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.554 --rc genhtml_branch_coverage=1 00:26:02.554 --rc genhtml_function_coverage=1 00:26:02.554 --rc genhtml_legend=1 00:26:02.554 --rc geninfo_all_blocks=1 00:26:02.554 --rc geninfo_unexecuted_blocks=1 00:26:02.554 00:26:02.554 ' 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:02.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.554 --rc genhtml_branch_coverage=1 00:26:02.554 --rc genhtml_function_coverage=1 00:26:02.554 --rc genhtml_legend=1 00:26:02.554 --rc geninfo_all_blocks=1 00:26:02.554 --rc geninfo_unexecuted_blocks=1 00:26:02.554 00:26:02.554 ' 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:02.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.554 --rc genhtml_branch_coverage=1 00:26:02.554 --rc genhtml_function_coverage=1 00:26:02.554 --rc genhtml_legend=1 00:26:02.554 --rc geninfo_all_blocks=1 00:26:02.554 --rc geninfo_unexecuted_blocks=1 00:26:02.554 00:26:02.554 ' 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:02.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.554 --rc genhtml_branch_coverage=1 00:26:02.554 --rc genhtml_function_coverage=1 00:26:02.554 --rc genhtml_legend=1 00:26:02.554 --rc geninfo_all_blocks=1 00:26:02.554 --rc geninfo_unexecuted_blocks=1 00:26:02.554 00:26:02.554 ' 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.554 12:47:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.084 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:05.084 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:05.084 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:05.085 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:05.085 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:05.085 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:05.085 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:05.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:05.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:26:05.085 00:26:05.085 --- 10.0.0.2 ping statistics --- 00:26:05.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.085 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:05.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:05.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:26:05.085 00:26:05.085 --- 10.0.0.1 ping statistics --- 00:26:05.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.085 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:05.085 12:47:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:05.085 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1131251 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1131251 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1131251 ']' 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.086 [2024-11-15 12:47:45.054282] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
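Editor's note: nvmf_tcp_init above moves one E810 port (cvl_0_0) into a private network namespace for the target and leaves its peer (cvl_0_1) in the root namespace for the initiator, then verifies the 10.0.0.2/10.0.0.1 pair with pings before the target is started. A condensed sketch of that topology, with interface names, addresses, and the iptables rule taken from the trace:

# Condensed sketch of the back-to-back NVMe/TCP topology the test builds.
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1        # start clean
ip netns add cvl_0_0_ns_spdk                              # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                        # target reachable
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # initiator reachable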
00:26:05.086 [2024-11-15 12:47:45.054356] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.086 [2024-11-15 12:47:45.124784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:05.086 [2024-11-15 12:47:45.184967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:05.086 [2024-11-15 12:47:45.185031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.086 [2024-11-15 12:47:45.185052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:05.086 [2024-11-15 12:47:45.185068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:05.086 [2024-11-15 12:47:45.185082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:05.086 [2024-11-15 12:47:45.186642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:05.086 [2024-11-15 12:47:45.186689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:05.086 [2024-11-15 12:47:45.186692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.086 [2024-11-15 12:47:45.324445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.086 Malloc0 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:05.086 [2024-11-15 12:47:45.382159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:05.086 { 00:26:05.086 "params": { 00:26:05.086 "name": "Nvme$subsystem", 00:26:05.086 "trtype": "$TEST_TRANSPORT", 00:26:05.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.086 "adrfam": "ipv4", 00:26:05.086 "trsvcid": "$NVMF_PORT", 00:26:05.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.086 "hdgst": ${hdgst:-false}, 00:26:05.086 "ddgst": ${ddgst:-false} 00:26:05.086 }, 00:26:05.086 "method": "bdev_nvme_attach_controller" 00:26:05.086 } 00:26:05.086 EOF 00:26:05.086 )") 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:05.086 12:47:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:05.086 "params": { 00:26:05.086 "name": "Nvme1", 00:26:05.086 "trtype": "tcp", 00:26:05.086 "traddr": "10.0.0.2", 00:26:05.086 "adrfam": "ipv4", 00:26:05.086 "trsvcid": "4420", 00:26:05.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:05.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:05.086 "hdgst": false, 00:26:05.086 "ddgst": false 00:26:05.086 }, 00:26:05.086 "method": "bdev_nvme_attach_controller" 00:26:05.086 }' 00:26:05.344 [2024-11-15 12:47:45.431286] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
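Editor's note: tgt_init drives the whole target bring-up over JSON-RPC via the rpc_cmd helper. Outside the harness the same sequence could be issued with scripts/rpc.py against the target's /var/tmp/spdk.sock socket; a sketch using the parameters from the trace (the rpc.py path is an assumption, the RPC names and arguments are the ones shown above):

# Sketch of the equivalent bring-up with scripts/rpc.py (script path assumed).
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192                       # TCP transport
$RPC bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB ram bdev
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # expose as ns 1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420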
00:26:05.344 [2024-11-15 12:47:45.431362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131379 ] 00:26:05.344 [2024-11-15 12:47:45.498171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.344 [2024-11-15 12:47:45.559164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.602 Running I/O for 1 seconds... 00:26:06.976 8226.00 IOPS, 32.13 MiB/s 00:26:06.976 Latency(us) 00:26:06.976 [2024-11-15T11:47:47.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.976 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:06.976 Verification LBA range: start 0x0 length 0x4000 00:26:06.976 Nvme1n1 : 1.01 8305.80 32.44 0.00 0.00 15345.76 2633.58 16214.09 00:26:06.976 [2024-11-15T11:47:47.320Z] =================================================================================================================== 00:26:06.976 [2024-11-15T11:47:47.320Z] Total : 8305.80 32.44 0.00 0.00 15345.76 2633.58 16214.09 00:26:06.976 12:47:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1131539 00:26:06.976 12:47:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:06.976 12:47:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:06.976 12:47:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:06.976 12:47:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:06.976 12:47:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:06.976 12:47:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:06.976 12:47:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:06.976 { 00:26:06.976 "params": { 00:26:06.976 "name": "Nvme$subsystem", 00:26:06.976 "trtype": "$TEST_TRANSPORT", 00:26:06.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.977 "adrfam": "ipv4", 00:26:06.977 "trsvcid": "$NVMF_PORT", 00:26:06.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.977 "hdgst": ${hdgst:-false}, 00:26:06.977 "ddgst": ${ddgst:-false} 00:26:06.977 }, 00:26:06.977 "method": "bdev_nvme_attach_controller" 00:26:06.977 } 00:26:06.977 EOF 00:26:06.977 )") 00:26:06.977 12:47:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:06.977 12:47:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
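Editor's note: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem (printed verbatim above) and hands it to bdevperf through a process-substitution fd. Written out as a standalone file it would look roughly like the sketch below; the outer subsystems/config wrapper is the usual SPDK JSON-config layout and is an assumption here, since the trace only prints the inner entry, and the file name is hypothetical.

# Sketch: persist the generated config and run bdevperf against it
# (file name hypothetical; wrapper layout assumed, inner params from the trace).
cat > /tmp/bdevperf_nvmf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -q 128 -o 4096 -w verify -t 1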
00:26:06.977 12:47:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:06.977 12:47:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:06.977 "params": { 00:26:06.977 "name": "Nvme1", 00:26:06.977 "trtype": "tcp", 00:26:06.977 "traddr": "10.0.0.2", 00:26:06.977 "adrfam": "ipv4", 00:26:06.977 "trsvcid": "4420", 00:26:06.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:06.977 "hdgst": false, 00:26:06.977 "ddgst": false 00:26:06.977 }, 00:26:06.977 "method": "bdev_nvme_attach_controller" 00:26:06.977 }' 00:26:06.977 [2024-11-15 12:47:47.167617] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:26:06.977 [2024-11-15 12:47:47.167714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131539 ] 00:26:06.977 [2024-11-15 12:47:47.234933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.977 [2024-11-15 12:47:47.291889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.235 Running I/O for 15 seconds... 00:26:09.543 8538.00 IOPS, 33.35 MiB/s [2024-11-15T11:47:50.147Z] 8604.50 IOPS, 33.61 MiB/s [2024-11-15T11:47:50.147Z] 12:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1131251 00:26:09.803 12:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:09.803 [2024-11-15 12:47:50.137792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.137841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.137883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.137903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.137920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.137938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.137952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.137970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.137987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 
12:47:50.138032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.138980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.138994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.139024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.139037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.139050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.139063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.139076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.139089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.804 [2024-11-15 12:47:50.139106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.804 [2024-11-15 12:47:50.139119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.805 [2024-11-15 12:47:50.139145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.805 [2024-11-15 12:47:50.139171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.805 [2024-11-15 12:47:50.139213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.805 [2024-11-15 12:47:50.139239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 
[2024-11-15 12:47:50.139322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:125 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.139979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.139994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.140023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.140038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.140052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.140066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.140079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.140093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.140106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.140121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.140134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.140148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.805 [2024-11-15 12:47:50.140161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.140176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.805 [2024-11-15 12:47:50.140189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.140203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49008 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:09.805 [2024-11-15 12:47:50.140216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.140234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.805 [2024-11-15 12:47:50.140248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.140263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.805 [2024-11-15 12:47:50.140293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.805 [2024-11-15 12:47:50.140308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.806 [2024-11-15 12:47:50.140322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.806 [2024-11-15 12:47:50.140350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.806 [2024-11-15 12:47:50.140378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.806 [2024-11-15 12:47:50.140422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 
[2024-11-15 12:47:50.140551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.140983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.140997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.806 [2024-11-15 12:47:50.141566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.806 [2024-11-15 12:47:50.141579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.807 [2024-11-15 12:47:50.141591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.807 [2024-11-15 12:47:50.141604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.807 [2024-11-15 12:47:50.141617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.807 [2024-11-15 12:47:50.141630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.807 [2024-11-15 12:47:50.141642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.807 [2024-11-15 12:47:50.141655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.807 [2024-11-15 12:47:50.141668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.807 [2024-11-15 12:47:50.141681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.807 [2024-11-15 12:47:50.141693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.807 [2024-11-15 12:47:50.141731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.807 [2024-11-15 12:47:50.141748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.807 [2024-11-15 12:47:50.141763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.807 [2024-11-15 12:47:50.141779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.807 [2024-11-15 12:47:50.141794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:09.807 [2024-11-15 12:47:50.141807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.807 [2024-11-15 12:47:50.141821] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122bbb0 is same with the state(6) to be set 00:26:09.807 [2024-11-15 12:47:50.141841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:09.807 [2024-11-15 12:47:50.141860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:09.807 [2024-11-15 12:47:50.141872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49064 len:8 PRP1 0x0 PRP2 0x0 00:26:09.807 [2024-11-15 12:47:50.141884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.807 [2024-11-15 12:47:50.142032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.807 [2024-11-15 12:47:50.142055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.807 [2024-11-15 12:47:50.142070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.807 [2024-11-15 12:47:50.142098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.807 [2024-11-15 12:47:50.142113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.807 [2024-11-15 12:47:50.142125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.807 [2024-11-15 12:47:50.142140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.807 [2024-11-15 12:47:50.142153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.807 [2024-11-15 12:47:50.142165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.066 [2024-11-15 12:47:50.145788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.066 [2024-11-15 12:47:50.145826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.066 [2024-11-15 12:47:50.146483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-11-15 12:47:50.146558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.066 [2024-11-15 12:47:50.146592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.066 [2024-11-15 12:47:50.146832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.066 [2024-11-15 12:47:50.147080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.066 [2024-11-15 12:47:50.147099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.066 [2024-11-15 12:47:50.147114] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.066 [2024-11-15 12:47:50.147129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.066 [2024-11-15 12:47:50.159450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.066 [2024-11-15 12:47:50.159876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-11-15 12:47:50.159905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.066 [2024-11-15 12:47:50.159921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.066 [2024-11-15 12:47:50.160160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.066 [2024-11-15 12:47:50.160373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.066 [2024-11-15 12:47:50.160392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.066 [2024-11-15 12:47:50.160404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.066 [2024-11-15 12:47:50.160415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.066 [2024-11-15 12:47:50.172712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.066 [2024-11-15 12:47:50.173123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.066 [2024-11-15 12:47:50.173150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.067 [2024-11-15 12:47:50.173165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.067 [2024-11-15 12:47:50.173380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.067 [2024-11-15 12:47:50.173588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.067 [2024-11-15 12:47:50.173607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.067 [2024-11-15 12:47:50.173618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.067 [2024-11-15 12:47:50.173629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
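Editor's note on the two error codes that dominate this stretch of the log: the completion status printed as "ABORTED - SQ DELETION (00/08)" is NVMe generic status (SCT 0x0, SC 0x08), i.e. each outstanding WRITE was aborted because its submission queue was deleted when the qpair was torn down; and "errno = 111" in the posix_sock_create errors is Linux ECONNREFUSED, returned by connect() because nothing is accepting connections on 10.0.0.2:4420 during the disruption window. The minimal, self-contained C sketch below (not SPDK code; the loopback address and NVMe/TCP port 4420 are used purely for illustration) reproduces how a refused TCP connect surfaces that errno:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),   /* NVMe/TCP default port; illustration only */
        };
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* With no listener bound to the port, connect() fails and sets errno. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Run against a port with no listener, this prints "connect() failed, errno = 111 (Connection refused)" on Linux, matching the messages in this log.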
00:26:10.067 [2024-11-15 12:47:50.186079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.067 [2024-11-15 12:47:50.186467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-11-15 12:47:50.186496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.067 [2024-11-15 12:47:50.186511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.067 [2024-11-15 12:47:50.186763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.067 [2024-11-15 12:47:50.186982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.067 [2024-11-15 12:47:50.187016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.067 [2024-11-15 12:47:50.187029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.067 [2024-11-15 12:47:50.187041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.067 [2024-11-15 12:47:50.199437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.067 [2024-11-15 12:47:50.199760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-11-15 12:47:50.199787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.067 [2024-11-15 12:47:50.199802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.067 [2024-11-15 12:47:50.200020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.067 [2024-11-15 12:47:50.200230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.067 [2024-11-15 12:47:50.200248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.067 [2024-11-15 12:47:50.200265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.067 [2024-11-15 12:47:50.200276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.067 [2024-11-15 12:47:50.212592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.067 [2024-11-15 12:47:50.212985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-11-15 12:47:50.213014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.067 [2024-11-15 12:47:50.213030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.067 [2024-11-15 12:47:50.213271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.067 [2024-11-15 12:47:50.213479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.067 [2024-11-15 12:47:50.213498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.067 [2024-11-15 12:47:50.213510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.067 [2024-11-15 12:47:50.213521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.067 [2024-11-15 12:47:50.225876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.067 [2024-11-15 12:47:50.226252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-11-15 12:47:50.226279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.067 [2024-11-15 12:47:50.226294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.067 [2024-11-15 12:47:50.226530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.067 [2024-11-15 12:47:50.226766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.067 [2024-11-15 12:47:50.226786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.067 [2024-11-15 12:47:50.226798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.067 [2024-11-15 12:47:50.226809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.067 [2024-11-15 12:47:50.239226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.067 [2024-11-15 12:47:50.239641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-11-15 12:47:50.239688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.067 [2024-11-15 12:47:50.239706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.067 [2024-11-15 12:47:50.239944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.067 [2024-11-15 12:47:50.240177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.067 [2024-11-15 12:47:50.240197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.067 [2024-11-15 12:47:50.240209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.067 [2024-11-15 12:47:50.240221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.067 [2024-11-15 12:47:50.252675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.067 [2024-11-15 12:47:50.253061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-11-15 12:47:50.253103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.067 [2024-11-15 12:47:50.253118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.067 [2024-11-15 12:47:50.253360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.067 [2024-11-15 12:47:50.253594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.067 [2024-11-15 12:47:50.253612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.067 [2024-11-15 12:47:50.253625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.067 [2024-11-15 12:47:50.253636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
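Editor's note on the retry cadence: the successive "resetting controller" notices above are stamped 12:47:50.145788, .159450, .172712, .186079, .199437, .212592, .225876 and .239226, i.e. consecutive deltas of roughly 13.2-13.7 ms, or on the order of 75 reconnect attempts per second for as long as the connection keeps being refused. The exact interval depends on the reconnect delay the test configures for bdev_nvme, so treat ~13 ms as an observation from this particular run rather than a fixed constant.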
00:26:10.067 [2024-11-15 12:47:50.266097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.067 [2024-11-15 12:47:50.266509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-11-15 12:47:50.266564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.067 [2024-11-15 12:47:50.266580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.067 [2024-11-15 12:47:50.266833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.067 [2024-11-15 12:47:50.267032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.067 [2024-11-15 12:47:50.267050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.067 [2024-11-15 12:47:50.267063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.067 [2024-11-15 12:47:50.267074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.067 [2024-11-15 12:47:50.279662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.067 [2024-11-15 12:47:50.280058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-11-15 12:47:50.280100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.067 [2024-11-15 12:47:50.280114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.067 [2024-11-15 12:47:50.280362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.067 [2024-11-15 12:47:50.280578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.067 [2024-11-15 12:47:50.280597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.067 [2024-11-15 12:47:50.280610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.067 [2024-11-15 12:47:50.280622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.067 [2024-11-15 12:47:50.293189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.067 [2024-11-15 12:47:50.293561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.067 [2024-11-15 12:47:50.293589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.067 [2024-11-15 12:47:50.293610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.067 [2024-11-15 12:47:50.293851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.067 [2024-11-15 12:47:50.294094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.067 [2024-11-15 12:47:50.294113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.067 [2024-11-15 12:47:50.294125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.067 [2024-11-15 12:47:50.294136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.067 [2024-11-15 12:47:50.306537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.067 [2024-11-15 12:47:50.306938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-11-15 12:47:50.306966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.068 [2024-11-15 12:47:50.306982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.068 [2024-11-15 12:47:50.307210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.068 [2024-11-15 12:47:50.307424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.068 [2024-11-15 12:47:50.307443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.068 [2024-11-15 12:47:50.307454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.068 [2024-11-15 12:47:50.307465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.068 [2024-11-15 12:47:50.320002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.068 [2024-11-15 12:47:50.320386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-11-15 12:47:50.320415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.068 [2024-11-15 12:47:50.320430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.068 [2024-11-15 12:47:50.320658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.068 [2024-11-15 12:47:50.320907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.068 [2024-11-15 12:47:50.320928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.068 [2024-11-15 12:47:50.320941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.068 [2024-11-15 12:47:50.320953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.068 [2024-11-15 12:47:50.333294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.068 [2024-11-15 12:47:50.333652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-11-15 12:47:50.333679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.068 [2024-11-15 12:47:50.333694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.068 [2024-11-15 12:47:50.333960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.068 [2024-11-15 12:47:50.334199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.068 [2024-11-15 12:47:50.334219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.068 [2024-11-15 12:47:50.334232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.068 [2024-11-15 12:47:50.334244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.068 [2024-11-15 12:47:50.346604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.068 [2024-11-15 12:47:50.347049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-11-15 12:47:50.347077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.068 [2024-11-15 12:47:50.347093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.068 [2024-11-15 12:47:50.347334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.068 [2024-11-15 12:47:50.347549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.068 [2024-11-15 12:47:50.347567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.068 [2024-11-15 12:47:50.347579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.068 [2024-11-15 12:47:50.347591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.068 [2024-11-15 12:47:50.359969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.068 [2024-11-15 12:47:50.360325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-11-15 12:47:50.360352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.068 [2024-11-15 12:47:50.360367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.068 [2024-11-15 12:47:50.360595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.068 [2024-11-15 12:47:50.360858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.068 [2024-11-15 12:47:50.360879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.068 [2024-11-15 12:47:50.360892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.068 [2024-11-15 12:47:50.360904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.068 [2024-11-15 12:47:50.373453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.068 [2024-11-15 12:47:50.373838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-11-15 12:47:50.373866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.068 [2024-11-15 12:47:50.373881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.068 [2024-11-15 12:47:50.374110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.068 [2024-11-15 12:47:50.374341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.068 [2024-11-15 12:47:50.374361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.068 [2024-11-15 12:47:50.374378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.068 [2024-11-15 12:47:50.374391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.068 [2024-11-15 12:47:50.386902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.068 [2024-11-15 12:47:50.387275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-11-15 12:47:50.387303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.068 [2024-11-15 12:47:50.387318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.068 [2024-11-15 12:47:50.387545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.068 [2024-11-15 12:47:50.387794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.068 [2024-11-15 12:47:50.387830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.068 [2024-11-15 12:47:50.387842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.068 [2024-11-15 12:47:50.387854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.068 [2024-11-15 12:47:50.400472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.068 [2024-11-15 12:47:50.400800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.068 [2024-11-15 12:47:50.400828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.068 [2024-11-15 12:47:50.400843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.068 [2024-11-15 12:47:50.401056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.068 [2024-11-15 12:47:50.401273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.068 [2024-11-15 12:47:50.401293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.068 [2024-11-15 12:47:50.401307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.068 [2024-11-15 12:47:50.401319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.328 [2024-11-15 12:47:50.414100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.328 [2024-11-15 12:47:50.414527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.328 [2024-11-15 12:47:50.414570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.328 [2024-11-15 12:47:50.414585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.328 [2024-11-15 12:47:50.414823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.328 [2024-11-15 12:47:50.415053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.328 [2024-11-15 12:47:50.415071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.328 [2024-11-15 12:47:50.415083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.328 [2024-11-15 12:47:50.415095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.328 [2024-11-15 12:47:50.427344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.328 [2024-11-15 12:47:50.427780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.328 [2024-11-15 12:47:50.427810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.328 [2024-11-15 12:47:50.427825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.328 [2024-11-15 12:47:50.428066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.328 [2024-11-15 12:47:50.428263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.328 [2024-11-15 12:47:50.428282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.328 [2024-11-15 12:47:50.428294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.328 [2024-11-15 12:47:50.428305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.328 [2024-11-15 12:47:50.440514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.328 [2024-11-15 12:47:50.440908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.328 [2024-11-15 12:47:50.440936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.328 [2024-11-15 12:47:50.440952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.328 [2024-11-15 12:47:50.441194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.328 [2024-11-15 12:47:50.441392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.328 [2024-11-15 12:47:50.441411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.328 [2024-11-15 12:47:50.441423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.328 [2024-11-15 12:47:50.441434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.328 [2024-11-15 12:47:50.453703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.328 [2024-11-15 12:47:50.454105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.328 [2024-11-15 12:47:50.454147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.328 [2024-11-15 12:47:50.454164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.328 [2024-11-15 12:47:50.454406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.328 [2024-11-15 12:47:50.454604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.328 [2024-11-15 12:47:50.454623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.328 [2024-11-15 12:47:50.454635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.328 [2024-11-15 12:47:50.454646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.328 [2024-11-15 12:47:50.467007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.328 [2024-11-15 12:47:50.467333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.328 [2024-11-15 12:47:50.467359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.328 [2024-11-15 12:47:50.467379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.328 [2024-11-15 12:47:50.467600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.328 [2024-11-15 12:47:50.467842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.328 [2024-11-15 12:47:50.467862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.328 [2024-11-15 12:47:50.467875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.328 [2024-11-15 12:47:50.467887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.328 [2024-11-15 12:47:50.480232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.328 [2024-11-15 12:47:50.480616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.328 [2024-11-15 12:47:50.480658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.328 [2024-11-15 12:47:50.480675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.328 [2024-11-15 12:47:50.480917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.328 [2024-11-15 12:47:50.481150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.328 [2024-11-15 12:47:50.481169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.328 [2024-11-15 12:47:50.481181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.328 [2024-11-15 12:47:50.481192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.328 [2024-11-15 12:47:50.493440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.328 [2024-11-15 12:47:50.493872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.328 [2024-11-15 12:47:50.493901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.329 [2024-11-15 12:47:50.493917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.329 [2024-11-15 12:47:50.494159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.329 [2024-11-15 12:47:50.494357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.329 [2024-11-15 12:47:50.494375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.329 [2024-11-15 12:47:50.494388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.329 [2024-11-15 12:47:50.494399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.329 [2024-11-15 12:47:50.506630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.329 [2024-11-15 12:47:50.507006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.329 [2024-11-15 12:47:50.507049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.329 [2024-11-15 12:47:50.507064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.329 [2024-11-15 12:47:50.507319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.329 [2024-11-15 12:47:50.507538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.329 [2024-11-15 12:47:50.507557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.329 [2024-11-15 12:47:50.507569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.329 [2024-11-15 12:47:50.507580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.329 [2024-11-15 12:47:50.519806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.329 [2024-11-15 12:47:50.520186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.329 [2024-11-15 12:47:50.520227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.329 [2024-11-15 12:47:50.520243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.329 [2024-11-15 12:47:50.520470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.329 [2024-11-15 12:47:50.520684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.329 [2024-11-15 12:47:50.520702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.329 [2024-11-15 12:47:50.520715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.329 [2024-11-15 12:47:50.520751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.329 7488.33 IOPS, 29.25 MiB/s [2024-11-15T11:47:50.673Z] [2024-11-15 12:47:50.532985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.329 [2024-11-15 12:47:50.533347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.329 [2024-11-15 12:47:50.533376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.329 [2024-11-15 12:47:50.533392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.329 [2024-11-15 12:47:50.533620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.329 [2024-11-15 12:47:50.533863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.329 [2024-11-15 12:47:50.533892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.329 [2024-11-15 12:47:50.533905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.329 [2024-11-15 12:47:50.533918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.329 [2024-11-15 12:47:50.546315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.329 [2024-11-15 12:47:50.546746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.329 [2024-11-15 12:47:50.546774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.329 [2024-11-15 12:47:50.546790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.329 [2024-11-15 12:47:50.547030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.329 [2024-11-15 12:47:50.547228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.329 [2024-11-15 12:47:50.547246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.329 [2024-11-15 12:47:50.547264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.329 [2024-11-15 12:47:50.547276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
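Editor's note on the interleaved performance sample ("7488.33 IOPS, 29.25 MiB/s"): it is self-consistent with a 4 KiB I/O size, which matches the aborted WRITEs earlier in this log (len:8 blocks, SGL length 0x1000 = 4096 bytes). As a quick check, assuming 512-byte sectors: 7488.33 IOPS x 4096 B/IO ~= 30,672,200 B/s, and 30,672,200 / 1,048,576 ~= 29.25 MiB/s.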
00:26:10.329 [2024-11-15 12:47:50.559577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.329 [2024-11-15 12:47:50.559925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.329 [2024-11-15 12:47:50.559954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.329 [2024-11-15 12:47:50.559969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.329 [2024-11-15 12:47:50.560196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.329 [2024-11-15 12:47:50.560409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.329 [2024-11-15 12:47:50.560428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.329 [2024-11-15 12:47:50.560441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.329 [2024-11-15 12:47:50.560452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.329 [2024-11-15 12:47:50.572939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.329 [2024-11-15 12:47:50.573361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.329 [2024-11-15 12:47:50.573389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.329 [2024-11-15 12:47:50.573404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.329 [2024-11-15 12:47:50.573645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.329 [2024-11-15 12:47:50.573871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.329 [2024-11-15 12:47:50.573892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.329 [2024-11-15 12:47:50.573905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.329 [2024-11-15 12:47:50.573916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.329 [2024-11-15 12:47:50.586162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.329 [2024-11-15 12:47:50.586536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.329 [2024-11-15 12:47:50.586578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.329 [2024-11-15 12:47:50.586593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.329 [2024-11-15 12:47:50.586856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.329 [2024-11-15 12:47:50.587076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.329 [2024-11-15 12:47:50.587094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.329 [2024-11-15 12:47:50.587107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.329 [2024-11-15 12:47:50.587118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.329 [2024-11-15 12:47:50.599508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.329 [2024-11-15 12:47:50.599943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.329 [2024-11-15 12:47:50.599971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.329 [2024-11-15 12:47:50.599986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.329 [2024-11-15 12:47:50.600227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.329 [2024-11-15 12:47:50.600424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.329 [2024-11-15 12:47:50.600443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.329 [2024-11-15 12:47:50.600455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.329 [2024-11-15 12:47:50.600467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.329 [2024-11-15 12:47:50.612842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.329 [2024-11-15 12:47:50.613216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.329 [2024-11-15 12:47:50.613257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.329 [2024-11-15 12:47:50.613273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.329 [2024-11-15 12:47:50.613501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.329 [2024-11-15 12:47:50.613715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.329 [2024-11-15 12:47:50.613757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.329 [2024-11-15 12:47:50.613769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.329 [2024-11-15 12:47:50.613781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.330 [2024-11-15 12:47:50.625988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.330 [2024-11-15 12:47:50.626391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.330 [2024-11-15 12:47:50.626418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.330 [2024-11-15 12:47:50.626433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.330 [2024-11-15 12:47:50.626654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.330 [2024-11-15 12:47:50.626899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.330 [2024-11-15 12:47:50.626919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.330 [2024-11-15 12:47:50.626932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.330 [2024-11-15 12:47:50.626943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.330 [2024-11-15 12:47:50.639164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.330 [2024-11-15 12:47:50.639549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.330 [2024-11-15 12:47:50.639578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.330 [2024-11-15 12:47:50.639599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.330 [2024-11-15 12:47:50.639824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.330 [2024-11-15 12:47:50.640042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.330 [2024-11-15 12:47:50.640063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.330 [2024-11-15 12:47:50.640076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.330 [2024-11-15 12:47:50.640089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.330 [2024-11-15 12:47:50.652413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.330 [2024-11-15 12:47:50.652786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.330 [2024-11-15 12:47:50.652813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.330 [2024-11-15 12:47:50.652828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.330 [2024-11-15 12:47:50.653054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.330 [2024-11-15 12:47:50.653270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.330 [2024-11-15 12:47:50.653289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.330 [2024-11-15 12:47:50.653301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.330 [2024-11-15 12:47:50.653312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.330 [2024-11-15 12:47:50.665670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.330 [2024-11-15 12:47:50.666054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.330 [2024-11-15 12:47:50.666082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.330 [2024-11-15 12:47:50.666098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.330 [2024-11-15 12:47:50.666311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.330 [2024-11-15 12:47:50.666544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.330 [2024-11-15 12:47:50.666564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.330 [2024-11-15 12:47:50.666592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.330 [2024-11-15 12:47:50.666604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.590 [2024-11-15 12:47:50.679280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.590 [2024-11-15 12:47:50.679670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.590 [2024-11-15 12:47:50.679712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.590 [2024-11-15 12:47:50.679739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.590 [2024-11-15 12:47:50.679968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.590 [2024-11-15 12:47:50.680204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.590 [2024-11-15 12:47:50.680223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.590 [2024-11-15 12:47:50.680235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.590 [2024-11-15 12:47:50.680247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.590 [2024-11-15 12:47:50.692545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.590 [2024-11-15 12:47:50.692947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.590 [2024-11-15 12:47:50.692976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.590 [2024-11-15 12:47:50.692991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.590 [2024-11-15 12:47:50.693218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.590 [2024-11-15 12:47:50.693433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.590 [2024-11-15 12:47:50.693452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.590 [2024-11-15 12:47:50.693463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.590 [2024-11-15 12:47:50.693474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.590 [2024-11-15 12:47:50.705796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.590 [2024-11-15 12:47:50.706185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.590 [2024-11-15 12:47:50.706211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.590 [2024-11-15 12:47:50.706226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.590 [2024-11-15 12:47:50.706460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.590 [2024-11-15 12:47:50.706658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.590 [2024-11-15 12:47:50.706677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.590 [2024-11-15 12:47:50.706690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.590 [2024-11-15 12:47:50.706701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.590 [2024-11-15 12:47:50.719152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.590 [2024-11-15 12:47:50.719589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.590 [2024-11-15 12:47:50.719617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.590 [2024-11-15 12:47:50.719633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.590 [2024-11-15 12:47:50.719870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.590 [2024-11-15 12:47:50.720105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.590 [2024-11-15 12:47:50.720124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.590 [2024-11-15 12:47:50.720141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.590 [2024-11-15 12:47:50.720153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.590 [2024-11-15 12:47:50.732294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.590 [2024-11-15 12:47:50.732686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.590 [2024-11-15 12:47:50.732713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.590 [2024-11-15 12:47:50.732737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.590 [2024-11-15 12:47:50.732959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.590 [2024-11-15 12:47:50.733174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.590 [2024-11-15 12:47:50.733193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.590 [2024-11-15 12:47:50.733205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.590 [2024-11-15 12:47:50.733216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.590 [2024-11-15 12:47:50.745659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.590 [2024-11-15 12:47:50.746031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.590 [2024-11-15 12:47:50.746060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.590 [2024-11-15 12:47:50.746075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.590 [2024-11-15 12:47:50.746303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.590 [2024-11-15 12:47:50.746535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.590 [2024-11-15 12:47:50.746554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.590 [2024-11-15 12:47:50.746566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.590 [2024-11-15 12:47:50.746577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.590 [2024-11-15 12:47:50.759100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.590 [2024-11-15 12:47:50.759539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.590 [2024-11-15 12:47:50.759566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.590 [2024-11-15 12:47:50.759581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.590 [2024-11-15 12:47:50.759818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.590 [2024-11-15 12:47:50.760051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.590 [2024-11-15 12:47:50.760071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.590 [2024-11-15 12:47:50.760083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.590 [2024-11-15 12:47:50.760094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.590 [2024-11-15 12:47:50.772311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.590 [2024-11-15 12:47:50.772681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.590 [2024-11-15 12:47:50.772731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.590 [2024-11-15 12:47:50.772749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.590 [2024-11-15 12:47:50.773000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.590 [2024-11-15 12:47:50.773198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.590 [2024-11-15 12:47:50.773217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.590 [2024-11-15 12:47:50.773229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.590 [2024-11-15 12:47:50.773240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.590 [2024-11-15 12:47:50.785661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.590 [2024-11-15 12:47:50.786053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.590 [2024-11-15 12:47:50.786081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.590 [2024-11-15 12:47:50.786097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.590 [2024-11-15 12:47:50.786336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.590 [2024-11-15 12:47:50.786551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.590 [2024-11-15 12:47:50.786569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.590 [2024-11-15 12:47:50.786581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.591 [2024-11-15 12:47:50.786593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.591 [2024-11-15 12:47:50.798874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.591 [2024-11-15 12:47:50.799228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.591 [2024-11-15 12:47:50.799256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.591 [2024-11-15 12:47:50.799271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.591 [2024-11-15 12:47:50.799511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.591 [2024-11-15 12:47:50.799749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.591 [2024-11-15 12:47:50.799769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.591 [2024-11-15 12:47:50.799781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.591 [2024-11-15 12:47:50.799793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.591 [2024-11-15 12:47:50.812163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.591 [2024-11-15 12:47:50.812468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.591 [2024-11-15 12:47:50.812509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.591 [2024-11-15 12:47:50.812531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.591 [2024-11-15 12:47:50.812762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.591 [2024-11-15 12:47:50.812967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.591 [2024-11-15 12:47:50.812986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.591 [2024-11-15 12:47:50.812999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.591 [2024-11-15 12:47:50.813010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.591 [2024-11-15 12:47:50.825338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.591 [2024-11-15 12:47:50.825713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.591 [2024-11-15 12:47:50.825748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.591 [2024-11-15 12:47:50.825763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.591 [2024-11-15 12:47:50.825991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.591 [2024-11-15 12:47:50.826205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.591 [2024-11-15 12:47:50.826224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.591 [2024-11-15 12:47:50.826236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.591 [2024-11-15 12:47:50.826247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.591 [2024-11-15 12:47:50.838610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.591 [2024-11-15 12:47:50.838959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.591 [2024-11-15 12:47:50.838987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.591 [2024-11-15 12:47:50.839002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.591 [2024-11-15 12:47:50.839231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.591 [2024-11-15 12:47:50.839444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.591 [2024-11-15 12:47:50.839463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.591 [2024-11-15 12:47:50.839475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.591 [2024-11-15 12:47:50.839487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.591 [2024-11-15 12:47:50.851902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.591 [2024-11-15 12:47:50.852267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.591 [2024-11-15 12:47:50.852309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.591 [2024-11-15 12:47:50.852325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.591 [2024-11-15 12:47:50.852578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.591 [2024-11-15 12:47:50.852826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.591 [2024-11-15 12:47:50.852846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.591 [2024-11-15 12:47:50.852859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.591 [2024-11-15 12:47:50.852871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.591 [2024-11-15 12:47:50.865241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.591 [2024-11-15 12:47:50.865633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.591 [2024-11-15 12:47:50.865674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.591 [2024-11-15 12:47:50.865691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.591 [2024-11-15 12:47:50.865942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.591 [2024-11-15 12:47:50.866142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.591 [2024-11-15 12:47:50.866161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.591 [2024-11-15 12:47:50.866173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.591 [2024-11-15 12:47:50.866184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.591 [2024-11-15 12:47:50.878674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.591 [2024-11-15 12:47:50.879066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.591 [2024-11-15 12:47:50.879109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.591 [2024-11-15 12:47:50.879125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.591 [2024-11-15 12:47:50.879353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.591 [2024-11-15 12:47:50.879567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.591 [2024-11-15 12:47:50.879586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.591 [2024-11-15 12:47:50.879598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.591 [2024-11-15 12:47:50.879610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.591 [2024-11-15 12:47:50.892017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.591 [2024-11-15 12:47:50.892389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.591 [2024-11-15 12:47:50.892416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.591 [2024-11-15 12:47:50.892432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.591 [2024-11-15 12:47:50.892661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.591 [2024-11-15 12:47:50.892910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.591 [2024-11-15 12:47:50.892933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.591 [2024-11-15 12:47:50.892952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.591 [2024-11-15 12:47:50.892965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.591 [2024-11-15 12:47:50.905277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.591 [2024-11-15 12:47:50.905582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.591 [2024-11-15 12:47:50.905623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.591 [2024-11-15 12:47:50.905639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.591 [2024-11-15 12:47:50.905887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.591 [2024-11-15 12:47:50.906139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.591 [2024-11-15 12:47:50.906157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.591 [2024-11-15 12:47:50.906169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.591 [2024-11-15 12:47:50.906180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.591 [2024-11-15 12:47:50.918606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.591 [2024-11-15 12:47:50.918949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.591 [2024-11-15 12:47:50.918976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.591 [2024-11-15 12:47:50.918992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.591 [2024-11-15 12:47:50.919212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.591 [2024-11-15 12:47:50.919428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.591 [2024-11-15 12:47:50.919447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.591 [2024-11-15 12:47:50.919459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.592 [2024-11-15 12:47:50.919470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.852 [2024-11-15 12:47:50.932310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.852 [2024-11-15 12:47:50.932695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.852 [2024-11-15 12:47:50.932731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.852 [2024-11-15 12:47:50.932748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.852 [2024-11-15 12:47:50.932976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.852 [2024-11-15 12:47:50.933195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.852 [2024-11-15 12:47:50.933214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.852 [2024-11-15 12:47:50.933227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.852 [2024-11-15 12:47:50.933238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.852 [2024-11-15 12:47:50.945602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.852 [2024-11-15 12:47:50.945994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.852 [2024-11-15 12:47:50.946036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.852 [2024-11-15 12:47:50.946051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.852 [2024-11-15 12:47:50.946285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.852 [2024-11-15 12:47:50.946499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.852 [2024-11-15 12:47:50.946518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.852 [2024-11-15 12:47:50.946530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.852 [2024-11-15 12:47:50.946542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.852 [2024-11-15 12:47:50.958873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.852 [2024-11-15 12:47:50.959253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.852 [2024-11-15 12:47:50.959296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.852 [2024-11-15 12:47:50.959312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.852 [2024-11-15 12:47:50.959566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.852 [2024-11-15 12:47:50.959773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.852 [2024-11-15 12:47:50.959792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.852 [2024-11-15 12:47:50.959804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.852 [2024-11-15 12:47:50.959816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.852 [2024-11-15 12:47:50.972158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.852 [2024-11-15 12:47:50.972547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.852 [2024-11-15 12:47:50.972574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.852 [2024-11-15 12:47:50.972608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.852 [2024-11-15 12:47:50.972859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.852 [2024-11-15 12:47:50.973058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.852 [2024-11-15 12:47:50.973078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.852 [2024-11-15 12:47:50.973090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.852 [2024-11-15 12:47:50.973102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.852 [2024-11-15 12:47:50.985701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.852 [2024-11-15 12:47:50.986161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.852 [2024-11-15 12:47:50.986190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.852 [2024-11-15 12:47:50.986211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.852 [2024-11-15 12:47:50.986454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.852 [2024-11-15 12:47:50.986652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.852 [2024-11-15 12:47:50.986671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.853 [2024-11-15 12:47:50.986683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.853 [2024-11-15 12:47:50.986695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.853 [2024-11-15 12:47:50.999177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.853 [2024-11-15 12:47:50.999551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.853 [2024-11-15 12:47:50.999595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.853 [2024-11-15 12:47:50.999610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.853 [2024-11-15 12:47:50.999890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.853 [2024-11-15 12:47:51.000089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.853 [2024-11-15 12:47:51.000108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.853 [2024-11-15 12:47:51.000121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.853 [2024-11-15 12:47:51.000132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.853 [2024-11-15 12:47:51.012510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.853 [2024-11-15 12:47:51.012907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.853 [2024-11-15 12:47:51.012936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.853 [2024-11-15 12:47:51.012952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.853 [2024-11-15 12:47:51.013193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.853 [2024-11-15 12:47:51.013406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.853 [2024-11-15 12:47:51.013432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.853 [2024-11-15 12:47:51.013444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.853 [2024-11-15 12:47:51.013455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.853 [2024-11-15 12:47:51.025758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.853 [2024-11-15 12:47:51.026184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.853 [2024-11-15 12:47:51.026212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.853 [2024-11-15 12:47:51.026228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.853 [2024-11-15 12:47:51.026469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.853 [2024-11-15 12:47:51.026688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.853 [2024-11-15 12:47:51.026707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.853 [2024-11-15 12:47:51.026728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.853 [2024-11-15 12:47:51.026742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.853 [2024-11-15 12:47:51.039159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.853 [2024-11-15 12:47:51.039533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.853 [2024-11-15 12:47:51.039570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.853 [2024-11-15 12:47:51.039586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.853 [2024-11-15 12:47:51.039838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.853 [2024-11-15 12:47:51.040042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.853 [2024-11-15 12:47:51.040061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.853 [2024-11-15 12:47:51.040073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.853 [2024-11-15 12:47:51.040085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.853 [2024-11-15 12:47:51.052554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.853 [2024-11-15 12:47:51.052974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.853 [2024-11-15 12:47:51.053006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.853 [2024-11-15 12:47:51.053022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.853 [2024-11-15 12:47:51.053251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.853 [2024-11-15 12:47:51.053463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.853 [2024-11-15 12:47:51.053482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.853 [2024-11-15 12:47:51.053494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.853 [2024-11-15 12:47:51.053506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.853 [2024-11-15 12:47:51.065931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.853 [2024-11-15 12:47:51.066320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.853 [2024-11-15 12:47:51.066363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.853 [2024-11-15 12:47:51.066379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.853 [2024-11-15 12:47:51.066607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.853 [2024-11-15 12:47:51.066855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.853 [2024-11-15 12:47:51.066876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.853 [2024-11-15 12:47:51.066894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.853 [2024-11-15 12:47:51.066907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.853 [2024-11-15 12:47:51.079334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.853 [2024-11-15 12:47:51.079732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.853 [2024-11-15 12:47:51.079775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.853 [2024-11-15 12:47:51.079791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.853 [2024-11-15 12:47:51.080018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.853 [2024-11-15 12:47:51.080232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.853 [2024-11-15 12:47:51.080251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.853 [2024-11-15 12:47:51.080264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.853 [2024-11-15 12:47:51.080275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.853 [2024-11-15 12:47:51.092692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.853 [2024-11-15 12:47:51.093130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.853 [2024-11-15 12:47:51.093158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.853 [2024-11-15 12:47:51.093173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.853 [2024-11-15 12:47:51.093413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.853 [2024-11-15 12:47:51.093630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.853 [2024-11-15 12:47:51.093650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.853 [2024-11-15 12:47:51.093663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.853 [2024-11-15 12:47:51.093674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.853 [2024-11-15 12:47:51.105962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.853 [2024-11-15 12:47:51.106362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.853 [2024-11-15 12:47:51.106390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.853 [2024-11-15 12:47:51.106406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.853 [2024-11-15 12:47:51.106635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.853 [2024-11-15 12:47:51.106863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.853 [2024-11-15 12:47:51.106883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.853 [2024-11-15 12:47:51.106896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.853 [2024-11-15 12:47:51.106908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.853 [2024-11-15 12:47:51.119356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.853 [2024-11-15 12:47:51.119757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.853 [2024-11-15 12:47:51.119796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.853 [2024-11-15 12:47:51.119812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.853 [2024-11-15 12:47:51.120040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.853 [2024-11-15 12:47:51.120255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.853 [2024-11-15 12:47:51.120273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.854 [2024-11-15 12:47:51.120285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.854 [2024-11-15 12:47:51.120297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.854 [2024-11-15 12:47:51.132716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.854 [2024-11-15 12:47:51.133170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.854 [2024-11-15 12:47:51.133199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.854 [2024-11-15 12:47:51.133214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.854 [2024-11-15 12:47:51.133456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.854 [2024-11-15 12:47:51.133671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.854 [2024-11-15 12:47:51.133690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.854 [2024-11-15 12:47:51.133724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.854 [2024-11-15 12:47:51.133739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.854 [2024-11-15 12:47:51.146402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.854 [2024-11-15 12:47:51.146734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.854 [2024-11-15 12:47:51.146762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.854 [2024-11-15 12:47:51.146777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.854 [2024-11-15 12:47:51.146990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.854 [2024-11-15 12:47:51.147208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.854 [2024-11-15 12:47:51.147228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.854 [2024-11-15 12:47:51.147242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.854 [2024-11-15 12:47:51.147255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.854 [2024-11-15 12:47:51.159831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.854 [2024-11-15 12:47:51.160257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.854 [2024-11-15 12:47:51.160299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.854 [2024-11-15 12:47:51.160321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.854 [2024-11-15 12:47:51.160563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.854 [2024-11-15 12:47:51.161053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.854 [2024-11-15 12:47:51.161087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.854 [2024-11-15 12:47:51.161099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.854 [2024-11-15 12:47:51.161111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:10.854 [2024-11-15 12:47:51.173064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.854 [2024-11-15 12:47:51.173437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.854 [2024-11-15 12:47:51.173479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.854 [2024-11-15 12:47:51.173495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.854 [2024-11-15 12:47:51.173761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.854 [2024-11-15 12:47:51.173995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.854 [2024-11-15 12:47:51.174031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.854 [2024-11-15 12:47:51.174044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.854 [2024-11-15 12:47:51.174055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:10.854 [2024-11-15 12:47:51.186306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:10.854 [2024-11-15 12:47:51.186686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.854 [2024-11-15 12:47:51.186733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:10.854 [2024-11-15 12:47:51.186748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:10.854 [2024-11-15 12:47:51.186989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:10.854 [2024-11-15 12:47:51.187220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:10.854 [2024-11-15 12:47:51.187239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:10.854 [2024-11-15 12:47:51.187250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:10.854 [2024-11-15 12:47:51.187261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.114 [2024-11-15 12:47:51.199692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.114 [2024-11-15 12:47:51.200199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.114 [2024-11-15 12:47:51.200227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.114 [2024-11-15 12:47:51.200241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.114 [2024-11-15 12:47:51.200462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.114 [2024-11-15 12:47:51.200676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.114 [2024-11-15 12:47:51.200694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.114 [2024-11-15 12:47:51.200706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.114 [2024-11-15 12:47:51.200726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.114 [2024-11-15 12:47:51.212758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.114 [2024-11-15 12:47:51.213087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.114 [2024-11-15 12:47:51.213114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.114 [2024-11-15 12:47:51.213130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.114 [2024-11-15 12:47:51.213350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.114 [2024-11-15 12:47:51.213557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.114 [2024-11-15 12:47:51.213575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.114 [2024-11-15 12:47:51.213587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.114 [2024-11-15 12:47:51.213598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.114 [2024-11-15 12:47:51.225865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.114 [2024-11-15 12:47:51.226357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.114 [2024-11-15 12:47:51.226400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.114 [2024-11-15 12:47:51.226416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.114 [2024-11-15 12:47:51.226666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.114 [2024-11-15 12:47:51.226906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.114 [2024-11-15 12:47:51.226926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.114 [2024-11-15 12:47:51.226938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.114 [2024-11-15 12:47:51.226950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.114 [2024-11-15 12:47:51.238853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.114 [2024-11-15 12:47:51.239217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.114 [2024-11-15 12:47:51.239259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.114 [2024-11-15 12:47:51.239274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.114 [2024-11-15 12:47:51.239527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.114 [2024-11-15 12:47:51.239777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.114 [2024-11-15 12:47:51.239798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.114 [2024-11-15 12:47:51.239818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.114 [2024-11-15 12:47:51.239830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.114 [2024-11-15 12:47:51.251961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.114 [2024-11-15 12:47:51.252451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.114 [2024-11-15 12:47:51.252493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.114 [2024-11-15 12:47:51.252509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.114 [2024-11-15 12:47:51.252771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.114 [2024-11-15 12:47:51.252990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.114 [2024-11-15 12:47:51.253023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.114 [2024-11-15 12:47:51.253036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.114 [2024-11-15 12:47:51.253047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.114 [2024-11-15 12:47:51.264975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.114 [2024-11-15 12:47:51.265410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.114 [2024-11-15 12:47:51.265451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.114 [2024-11-15 12:47:51.265466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.114 [2024-11-15 12:47:51.265728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.114 [2024-11-15 12:47:51.265948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.114 [2024-11-15 12:47:51.265967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.114 [2024-11-15 12:47:51.265980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.114 [2024-11-15 12:47:51.265992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.114 [2024-11-15 12:47:51.278261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.114 [2024-11-15 12:47:51.278590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.114 [2024-11-15 12:47:51.278617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.114 [2024-11-15 12:47:51.278632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.115 [2024-11-15 12:47:51.278863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.115 [2024-11-15 12:47:51.279090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.115 [2024-11-15 12:47:51.279108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.115 [2024-11-15 12:47:51.279119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.115 [2024-11-15 12:47:51.279130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.115 [2024-11-15 12:47:51.291316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.115 [2024-11-15 12:47:51.291804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.115 [2024-11-15 12:47:51.291846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.115 [2024-11-15 12:47:51.291862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.115 [2024-11-15 12:47:51.292111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.115 [2024-11-15 12:47:51.292319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.115 [2024-11-15 12:47:51.292337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.115 [2024-11-15 12:47:51.292349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.115 [2024-11-15 12:47:51.292359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.115 [2024-11-15 12:47:51.304384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.115 [2024-11-15 12:47:51.304710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.115 [2024-11-15 12:47:51.304745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.115 [2024-11-15 12:47:51.304761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.115 [2024-11-15 12:47:51.304984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.115 [2024-11-15 12:47:51.305192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.115 [2024-11-15 12:47:51.305210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.115 [2024-11-15 12:47:51.305222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.115 [2024-11-15 12:47:51.305233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.115 [2024-11-15 12:47:51.317510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.115 [2024-11-15 12:47:51.317870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.115 [2024-11-15 12:47:51.317897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.115 [2024-11-15 12:47:51.317912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.115 [2024-11-15 12:47:51.318146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.115 [2024-11-15 12:47:51.318354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.115 [2024-11-15 12:47:51.318373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.115 [2024-11-15 12:47:51.318385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.115 [2024-11-15 12:47:51.318396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.115 [2024-11-15 12:47:51.330525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.115 [2024-11-15 12:47:51.330897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.115 [2024-11-15 12:47:51.330923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.115 [2024-11-15 12:47:51.330944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.115 [2024-11-15 12:47:51.331178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.115 [2024-11-15 12:47:51.331386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.115 [2024-11-15 12:47:51.331405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.115 [2024-11-15 12:47:51.331417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.115 [2024-11-15 12:47:51.331428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.115 [2024-11-15 12:47:51.343536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.115 [2024-11-15 12:47:51.343908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.115 [2024-11-15 12:47:51.343951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.115 [2024-11-15 12:47:51.343966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.115 [2024-11-15 12:47:51.344220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.115 [2024-11-15 12:47:51.344430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.115 [2024-11-15 12:47:51.344448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.115 [2024-11-15 12:47:51.344460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.115 [2024-11-15 12:47:51.344471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.115 [2024-11-15 12:47:51.356631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.115 [2024-11-15 12:47:51.356995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.115 [2024-11-15 12:47:51.357022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.115 [2024-11-15 12:47:51.357037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.115 [2024-11-15 12:47:51.357257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.115 [2024-11-15 12:47:51.357464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.115 [2024-11-15 12:47:51.357482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.115 [2024-11-15 12:47:51.357494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.115 [2024-11-15 12:47:51.357505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.115 [2024-11-15 12:47:51.369711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.115 [2024-11-15 12:47:51.370037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.115 [2024-11-15 12:47:51.370079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.115 [2024-11-15 12:47:51.370094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.115 [2024-11-15 12:47:51.370314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.115 [2024-11-15 12:47:51.370528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.115 [2024-11-15 12:47:51.370546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.115 [2024-11-15 12:47:51.370558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.115 [2024-11-15 12:47:51.370568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.115 [2024-11-15 12:47:51.382793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.115 [2024-11-15 12:47:51.383280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.115 [2024-11-15 12:47:51.383322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.115 [2024-11-15 12:47:51.383338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.115 [2024-11-15 12:47:51.383588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.115 [2024-11-15 12:47:51.383823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.115 [2024-11-15 12:47:51.383843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.115 [2024-11-15 12:47:51.383855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.115 [2024-11-15 12:47:51.383866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.115 [2024-11-15 12:47:51.395812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.115 [2024-11-15 12:47:51.396126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.115 [2024-11-15 12:47:51.396167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.115 [2024-11-15 12:47:51.396182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.115 [2024-11-15 12:47:51.396402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.115 [2024-11-15 12:47:51.396610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.115 [2024-11-15 12:47:51.396628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.115 [2024-11-15 12:47:51.396640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.115 [2024-11-15 12:47:51.396667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.115 [2024-11-15 12:47:51.409108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.115 [2024-11-15 12:47:51.409592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.115 [2024-11-15 12:47:51.409617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.116 [2024-11-15 12:47:51.409648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.116 [2024-11-15 12:47:51.409911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.116 [2024-11-15 12:47:51.410123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.116 [2024-11-15 12:47:51.410141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.116 [2024-11-15 12:47:51.410159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.116 [2024-11-15 12:47:51.410171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.116 [2024-11-15 12:47:51.422269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.116 [2024-11-15 12:47:51.422678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.116 [2024-11-15 12:47:51.422727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.116 [2024-11-15 12:47:51.422745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.116 [2024-11-15 12:47:51.422998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.116 [2024-11-15 12:47:51.423208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.116 [2024-11-15 12:47:51.423226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.116 [2024-11-15 12:47:51.423237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.116 [2024-11-15 12:47:51.423248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.116 [2024-11-15 12:47:51.435329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.116 [2024-11-15 12:47:51.435710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.116 [2024-11-15 12:47:51.435757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.116 [2024-11-15 12:47:51.435773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.116 [2024-11-15 12:47:51.435995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.116 [2024-11-15 12:47:51.436204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.116 [2024-11-15 12:47:51.436222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.116 [2024-11-15 12:47:51.436234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.116 [2024-11-15 12:47:51.436245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.116 [2024-11-15 12:47:51.448369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.116 [2024-11-15 12:47:51.448747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.116 [2024-11-15 12:47:51.448787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.116 [2024-11-15 12:47:51.448801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.116 [2024-11-15 12:47:51.449029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.116 [2024-11-15 12:47:51.449237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.116 [2024-11-15 12:47:51.449256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.116 [2024-11-15 12:47:51.449268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.116 [2024-11-15 12:47:51.449278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.375 [2024-11-15 12:47:51.461757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.375 [2024-11-15 12:47:51.462171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.375 [2024-11-15 12:47:51.462214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.375 [2024-11-15 12:47:51.462229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.375 [2024-11-15 12:47:51.462463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.375 [2024-11-15 12:47:51.462669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.375 [2024-11-15 12:47:51.462688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.375 [2024-11-15 12:47:51.462700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.375 [2024-11-15 12:47:51.462711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.375 [2024-11-15 12:47:51.474864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.375 [2024-11-15 12:47:51.475256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.375 [2024-11-15 12:47:51.475282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.375 [2024-11-15 12:47:51.475296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.375 [2024-11-15 12:47:51.475512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.375 [2024-11-15 12:47:51.475730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.375 [2024-11-15 12:47:51.475763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.375 [2024-11-15 12:47:51.475776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.375 [2024-11-15 12:47:51.475788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.375 [2024-11-15 12:47:51.487993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.375 [2024-11-15 12:47:51.488354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.375 [2024-11-15 12:47:51.488397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.375 [2024-11-15 12:47:51.488412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.375 [2024-11-15 12:47:51.488666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.375 [2024-11-15 12:47:51.488901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.375 [2024-11-15 12:47:51.488920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.375 [2024-11-15 12:47:51.488933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.375 [2024-11-15 12:47:51.488944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.375 [2024-11-15 12:47:51.501090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.375 [2024-11-15 12:47:51.501404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.375 [2024-11-15 12:47:51.501429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.375 [2024-11-15 12:47:51.501448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.375 [2024-11-15 12:47:51.501642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.375 [2024-11-15 12:47:51.501880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.375 [2024-11-15 12:47:51.501899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.375 [2024-11-15 12:47:51.501912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.375 [2024-11-15 12:47:51.501923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.375 [2024-11-15 12:47:51.514212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.375 [2024-11-15 12:47:51.514574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.375 [2024-11-15 12:47:51.514617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.375 [2024-11-15 12:47:51.514632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.375 [2024-11-15 12:47:51.514894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.375 [2024-11-15 12:47:51.515105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.375 [2024-11-15 12:47:51.515124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.375 [2024-11-15 12:47:51.515136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.376 [2024-11-15 12:47:51.515146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.376 5616.25 IOPS, 21.94 MiB/s [2024-11-15T11:47:51.720Z] [2024-11-15 12:47:51.528542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.376 [2024-11-15 12:47:51.529037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.376 [2024-11-15 12:47:51.529078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.376 [2024-11-15 12:47:51.529095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.376 [2024-11-15 12:47:51.529345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.376 [2024-11-15 12:47:51.529537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.376 [2024-11-15 12:47:51.529555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.376 [2024-11-15 12:47:51.529568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.376 [2024-11-15 12:47:51.529579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.376 [2024-11-15 12:47:51.541507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.376 [2024-11-15 12:47:51.541885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.376 [2024-11-15 12:47:51.541927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.376 [2024-11-15 12:47:51.541942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.376 [2024-11-15 12:47:51.542163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.376 [2024-11-15 12:47:51.542377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.376 [2024-11-15 12:47:51.542395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.376 [2024-11-15 12:47:51.542407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.376 [2024-11-15 12:47:51.542418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.376 [2024-11-15 12:47:51.554585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.376 [2024-11-15 12:47:51.554967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.376 [2024-11-15 12:47:51.554994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.376 [2024-11-15 12:47:51.555024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.376 [2024-11-15 12:47:51.555245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.376 [2024-11-15 12:47:51.555453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.376 [2024-11-15 12:47:51.555471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.376 [2024-11-15 12:47:51.555483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.376 [2024-11-15 12:47:51.555494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.376 [2024-11-15 12:47:51.567744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.376 [2024-11-15 12:47:51.568106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.376 [2024-11-15 12:47:51.568134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.376 [2024-11-15 12:47:51.568164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.376 [2024-11-15 12:47:51.568415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.376 [2024-11-15 12:47:51.568622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.376 [2024-11-15 12:47:51.568641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.376 [2024-11-15 12:47:51.568652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.376 [2024-11-15 12:47:51.568663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.376 [2024-11-15 12:47:51.580873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.376 [2024-11-15 12:47:51.581290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.376 [2024-11-15 12:47:51.581330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.376 [2024-11-15 12:47:51.581346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.376 [2024-11-15 12:47:51.581565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.376 [2024-11-15 12:47:51.581801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.376 [2024-11-15 12:47:51.581820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.376 [2024-11-15 12:47:51.581838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.376 [2024-11-15 12:47:51.581850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.376 [2024-11-15 12:47:51.593887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.376 [2024-11-15 12:47:51.594245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.376 [2024-11-15 12:47:51.594272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.376 [2024-11-15 12:47:51.594287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.376 [2024-11-15 12:47:51.594492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.376 [2024-11-15 12:47:51.594730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.376 [2024-11-15 12:47:51.594749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.376 [2024-11-15 12:47:51.594761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.376 [2024-11-15 12:47:51.594773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.376 [2024-11-15 12:47:51.606889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.376 [2024-11-15 12:47:51.607216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.376 [2024-11-15 12:47:51.607243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.376 [2024-11-15 12:47:51.607257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.376 [2024-11-15 12:47:51.607472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.376 [2024-11-15 12:47:51.607680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.376 [2024-11-15 12:47:51.607698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.376 [2024-11-15 12:47:51.607710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.376 [2024-11-15 12:47:51.607730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.376 [2024-11-15 12:47:51.620015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.376 [2024-11-15 12:47:51.620410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.376 [2024-11-15 12:47:51.620481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.376 [2024-11-15 12:47:51.620496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.376 [2024-11-15 12:47:51.620756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.376 [2024-11-15 12:47:51.620955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.376 [2024-11-15 12:47:51.620973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.376 [2024-11-15 12:47:51.620985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.376 [2024-11-15 12:47:51.620997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.376 [2024-11-15 12:47:51.633070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.376 [2024-11-15 12:47:51.633559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.376 [2024-11-15 12:47:51.633601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.376 [2024-11-15 12:47:51.633617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.376 [2024-11-15 12:47:51.633865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.376 [2024-11-15 12:47:51.634092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.376 [2024-11-15 12:47:51.634111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.376 [2024-11-15 12:47:51.634123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.376 [2024-11-15 12:47:51.634134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.376 [2024-11-15 12:47:51.646346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.376 [2024-11-15 12:47:51.646707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.376 [2024-11-15 12:47:51.646742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.376 [2024-11-15 12:47:51.646758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.376 [2024-11-15 12:47:51.646972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.376 [2024-11-15 12:47:51.647237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.376 [2024-11-15 12:47:51.647256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.377 [2024-11-15 12:47:51.647269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.377 [2024-11-15 12:47:51.647281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.377 [2024-11-15 12:47:51.659601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.377 [2024-11-15 12:47:51.660044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.377 [2024-11-15 12:47:51.660071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.377 [2024-11-15 12:47:51.660102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.377 [2024-11-15 12:47:51.660342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.377 [2024-11-15 12:47:51.660534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.377 [2024-11-15 12:47:51.660552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.377 [2024-11-15 12:47:51.660564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.377 [2024-11-15 12:47:51.660575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.377 [2024-11-15 12:47:51.672838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.377 [2024-11-15 12:47:51.673218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.377 [2024-11-15 12:47:51.673244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.377 [2024-11-15 12:47:51.673279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.377 [2024-11-15 12:47:51.673504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.377 [2024-11-15 12:47:51.673714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.377 [2024-11-15 12:47:51.673757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.377 [2024-11-15 12:47:51.673770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.377 [2024-11-15 12:47:51.673781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.377 [2024-11-15 12:47:51.685938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.377 [2024-11-15 12:47:51.686375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.377 [2024-11-15 12:47:51.686431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.377 [2024-11-15 12:47:51.686445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.377 [2024-11-15 12:47:51.686675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.377 [2024-11-15 12:47:51.686895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.377 [2024-11-15 12:47:51.686915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.377 [2024-11-15 12:47:51.686927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.377 [2024-11-15 12:47:51.686938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.377 [2024-11-15 12:47:51.699073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.377 [2024-11-15 12:47:51.699559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.377 [2024-11-15 12:47:51.699600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.377 [2024-11-15 12:47:51.699615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.377 [2024-11-15 12:47:51.699865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.377 [2024-11-15 12:47:51.700091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.377 [2024-11-15 12:47:51.700110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.377 [2024-11-15 12:47:51.700122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.377 [2024-11-15 12:47:51.700132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.377 [2024-11-15 12:47:51.712183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.377 [2024-11-15 12:47:51.712658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.377 [2024-11-15 12:47:51.712709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.377 [2024-11-15 12:47:51.712734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.377 [2024-11-15 12:47:51.712998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.377 [2024-11-15 12:47:51.713229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.377 [2024-11-15 12:47:51.713249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.377 [2024-11-15 12:47:51.713262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.377 [2024-11-15 12:47:51.713274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.637 [2024-11-15 12:47:51.725417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.637 [2024-11-15 12:47:51.725906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.637 [2024-11-15 12:47:51.725947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.637 [2024-11-15 12:47:51.725964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.637 [2024-11-15 12:47:51.726207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.637 [2024-11-15 12:47:51.726399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.637 [2024-11-15 12:47:51.726417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.637 [2024-11-15 12:47:51.726429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.637 [2024-11-15 12:47:51.726440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.637 [2024-11-15 12:47:51.738560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.637 [2024-11-15 12:47:51.738894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.637 [2024-11-15 12:47:51.738922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.637 [2024-11-15 12:47:51.738937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.637 [2024-11-15 12:47:51.739158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.637 [2024-11-15 12:47:51.739368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.637 [2024-11-15 12:47:51.739386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.637 [2024-11-15 12:47:51.739398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.637 [2024-11-15 12:47:51.739409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.637 [2024-11-15 12:47:51.751745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.637 [2024-11-15 12:47:51.752107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.637 [2024-11-15 12:47:51.752150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.637 [2024-11-15 12:47:51.752166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.637 [2024-11-15 12:47:51.752418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.637 [2024-11-15 12:47:51.752625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.637 [2024-11-15 12:47:51.752643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.637 [2024-11-15 12:47:51.752659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.637 [2024-11-15 12:47:51.752671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.637 [2024-11-15 12:47:51.765147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.637 [2024-11-15 12:47:51.765622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.637 [2024-11-15 12:47:51.765675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.637 [2024-11-15 12:47:51.765690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.637 [2024-11-15 12:47:51.765970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.637 [2024-11-15 12:47:51.766196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.637 [2024-11-15 12:47:51.766214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.637 [2024-11-15 12:47:51.766226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.637 [2024-11-15 12:47:51.766237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.637 [2024-11-15 12:47:51.778367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.637 [2024-11-15 12:47:51.778729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.637 [2024-11-15 12:47:51.778772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.637 [2024-11-15 12:47:51.778795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.637 [2024-11-15 12:47:51.779023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.637 [2024-11-15 12:47:51.779250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.637 [2024-11-15 12:47:51.779268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.637 [2024-11-15 12:47:51.779280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.637 [2024-11-15 12:47:51.779291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.637 [2024-11-15 12:47:51.791639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.637 [2024-11-15 12:47:51.792038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.637 [2024-11-15 12:47:51.792080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.637 [2024-11-15 12:47:51.792094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.638 [2024-11-15 12:47:51.792321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.638 [2024-11-15 12:47:51.792514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.638 [2024-11-15 12:47:51.792532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.638 [2024-11-15 12:47:51.792543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.638 [2024-11-15 12:47:51.792554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.638 [2024-11-15 12:47:51.804836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.638 [2024-11-15 12:47:51.805199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.638 [2024-11-15 12:47:51.805239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.638 [2024-11-15 12:47:51.805255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.638 [2024-11-15 12:47:51.805476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.638 [2024-11-15 12:47:51.805683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.638 [2024-11-15 12:47:51.805702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.638 [2024-11-15 12:47:51.805713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.638 [2024-11-15 12:47:51.805750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.638 [2024-11-15 12:47:51.817815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.638 [2024-11-15 12:47:51.818191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.638 [2024-11-15 12:47:51.818232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.638 [2024-11-15 12:47:51.818247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.638 [2024-11-15 12:47:51.818468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.638 [2024-11-15 12:47:51.818675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.638 [2024-11-15 12:47:51.818694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.638 [2024-11-15 12:47:51.818705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.638 [2024-11-15 12:47:51.818716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.638 [2024-11-15 12:47:51.830819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.638 [2024-11-15 12:47:51.831179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.638 [2024-11-15 12:47:51.831220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.638 [2024-11-15 12:47:51.831236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.638 [2024-11-15 12:47:51.831480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.638 [2024-11-15 12:47:51.831671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.638 [2024-11-15 12:47:51.831689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.638 [2024-11-15 12:47:51.831702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.638 [2024-11-15 12:47:51.831713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.638 [2024-11-15 12:47:51.843798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.638 [2024-11-15 12:47:51.844163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.638 [2024-11-15 12:47:51.844205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.638 [2024-11-15 12:47:51.844226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.638 [2024-11-15 12:47:51.844493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.638 [2024-11-15 12:47:51.844705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.638 [2024-11-15 12:47:51.844734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.638 [2024-11-15 12:47:51.844748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.638 [2024-11-15 12:47:51.844759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.638 [2024-11-15 12:47:51.856985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.638 [2024-11-15 12:47:51.857351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.638 [2024-11-15 12:47:51.857393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.638 [2024-11-15 12:47:51.857408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.638 [2024-11-15 12:47:51.857655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.638 [2024-11-15 12:47:51.857891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.638 [2024-11-15 12:47:51.857911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.638 [2024-11-15 12:47:51.857923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.638 [2024-11-15 12:47:51.857935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.638 [2024-11-15 12:47:51.870000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.638 [2024-11-15 12:47:51.870489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.638 [2024-11-15 12:47:51.870531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.638 [2024-11-15 12:47:51.870547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.638 [2024-11-15 12:47:51.870808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.638 [2024-11-15 12:47:51.871006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.638 [2024-11-15 12:47:51.871025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.638 [2024-11-15 12:47:51.871052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.638 [2024-11-15 12:47:51.871064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.638 [2024-11-15 12:47:51.883011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.638 [2024-11-15 12:47:51.883496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.638 [2024-11-15 12:47:51.883537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.638 [2024-11-15 12:47:51.883554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.638 [2024-11-15 12:47:51.883818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.638 [2024-11-15 12:47:51.884024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.638 [2024-11-15 12:47:51.884043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.638 [2024-11-15 12:47:51.884056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.638 [2024-11-15 12:47:51.884067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.638 [2024-11-15 12:47:51.896039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.638 [2024-11-15 12:47:51.896425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.638 [2024-11-15 12:47:51.896451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.638 [2024-11-15 12:47:51.896465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.638 [2024-11-15 12:47:51.896679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.638 [2024-11-15 12:47:51.896915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.638 [2024-11-15 12:47:51.896934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.638 [2024-11-15 12:47:51.896947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.638 [2024-11-15 12:47:51.896958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.638 [2024-11-15 12:47:51.909358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.638 [2024-11-15 12:47:51.909785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.638 [2024-11-15 12:47:51.909812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.638 [2024-11-15 12:47:51.909843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.638 [2024-11-15 12:47:51.910083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.638 [2024-11-15 12:47:51.910290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.638 [2024-11-15 12:47:51.910308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.638 [2024-11-15 12:47:51.910319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.638 [2024-11-15 12:47:51.910330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.638 [2024-11-15 12:47:51.922417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.638 [2024-11-15 12:47:51.922780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.638 [2024-11-15 12:47:51.922820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.639 [2024-11-15 12:47:51.922835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.639 [2024-11-15 12:47:51.923076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.639 [2024-11-15 12:47:51.923269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.639 [2024-11-15 12:47:51.923287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.639 [2024-11-15 12:47:51.923303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.639 [2024-11-15 12:47:51.923315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.639 [2024-11-15 12:47:51.935604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.639 [2024-11-15 12:47:51.935939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.639 [2024-11-15 12:47:51.935967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.639 [2024-11-15 12:47:51.935983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.639 [2024-11-15 12:47:51.936204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.639 [2024-11-15 12:47:51.936412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.639 [2024-11-15 12:47:51.936430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.639 [2024-11-15 12:47:51.936442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.639 [2024-11-15 12:47:51.936453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.639 [2024-11-15 12:47:51.948749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.639 [2024-11-15 12:47:51.949060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.639 [2024-11-15 12:47:51.949084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.639 [2024-11-15 12:47:51.949098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.639 [2024-11-15 12:47:51.949292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.639 [2024-11-15 12:47:51.949499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.639 [2024-11-15 12:47:51.949517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.639 [2024-11-15 12:47:51.949529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.639 [2024-11-15 12:47:51.949540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.639 [2024-11-15 12:47:51.961791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.639 [2024-11-15 12:47:51.962216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.639 [2024-11-15 12:47:51.962243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.639 [2024-11-15 12:47:51.962274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.639 [2024-11-15 12:47:51.962513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.639 [2024-11-15 12:47:51.962729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.639 [2024-11-15 12:47:51.962762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.639 [2024-11-15 12:47:51.962774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.639 [2024-11-15 12:47:51.962786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.639 [2024-11-15 12:47:51.975092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.639 [2024-11-15 12:47:51.975522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.639 [2024-11-15 12:47:51.975563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.639 [2024-11-15 12:47:51.975579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.639 [2024-11-15 12:47:51.975827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.639 [2024-11-15 12:47:51.976057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.639 [2024-11-15 12:47:51.976075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.639 [2024-11-15 12:47:51.976087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.639 [2024-11-15 12:47:51.976098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.899 [2024-11-15 12:47:51.988118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.899 [2024-11-15 12:47:51.988446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.899 [2024-11-15 12:47:51.988473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.899 [2024-11-15 12:47:51.988488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.899 [2024-11-15 12:47:51.988710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.899 [2024-11-15 12:47:51.988944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.899 [2024-11-15 12:47:51.988965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.899 [2024-11-15 12:47:51.988978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.899 [2024-11-15 12:47:51.988990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.899 [2024-11-15 12:47:52.001271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.899 [2024-11-15 12:47:52.001644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.899 [2024-11-15 12:47:52.001685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.899 [2024-11-15 12:47:52.001700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.899 [2024-11-15 12:47:52.001974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.899 [2024-11-15 12:47:52.002184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.899 [2024-11-15 12:47:52.002202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.899 [2024-11-15 12:47:52.002214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.899 [2024-11-15 12:47:52.002225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.899 [2024-11-15 12:47:52.014269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.899 [2024-11-15 12:47:52.014755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.899 [2024-11-15 12:47:52.014797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.899 [2024-11-15 12:47:52.014818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.899 [2024-11-15 12:47:52.015069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.899 [2024-11-15 12:47:52.015276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.899 [2024-11-15 12:47:52.015294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.899 [2024-11-15 12:47:52.015307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.899 [2024-11-15 12:47:52.015318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.899 [2024-11-15 12:47:52.027272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.899 [2024-11-15 12:47:52.027634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.899 [2024-11-15 12:47:52.027676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.899 [2024-11-15 12:47:52.027691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.899 [2024-11-15 12:47:52.027966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.899 [2024-11-15 12:47:52.028177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.899 [2024-11-15 12:47:52.028195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.899 [2024-11-15 12:47:52.028207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.899 [2024-11-15 12:47:52.028218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.899 [2024-11-15 12:47:52.040261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.899 [2024-11-15 12:47:52.040639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.899 [2024-11-15 12:47:52.040680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.899 [2024-11-15 12:47:52.040696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.899 [2024-11-15 12:47:52.040926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.899 [2024-11-15 12:47:52.041151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.899 [2024-11-15 12:47:52.041169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.899 [2024-11-15 12:47:52.041181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.899 [2024-11-15 12:47:52.041192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.899 [2024-11-15 12:47:52.053426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.899 [2024-11-15 12:47:52.053851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.899 [2024-11-15 12:47:52.053880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.899 [2024-11-15 12:47:52.053895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.899 [2024-11-15 12:47:52.054136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.899 [2024-11-15 12:47:52.054333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.899 [2024-11-15 12:47:52.054352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.899 [2024-11-15 12:47:52.054364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.899 [2024-11-15 12:47:52.054375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.899 [2024-11-15 12:47:52.066575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.899 [2024-11-15 12:47:52.066945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.899 [2024-11-15 12:47:52.066973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.900 [2024-11-15 12:47:52.066988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.900 [2024-11-15 12:47:52.067222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.900 [2024-11-15 12:47:52.067414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.900 [2024-11-15 12:47:52.067432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.900 [2024-11-15 12:47:52.067445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.900 [2024-11-15 12:47:52.067456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.900 [2024-11-15 12:47:52.079732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.900 [2024-11-15 12:47:52.080080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.900 [2024-11-15 12:47:52.080123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.900 [2024-11-15 12:47:52.080138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.900 [2024-11-15 12:47:52.080391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.900 [2024-11-15 12:47:52.080598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.900 [2024-11-15 12:47:52.080617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.900 [2024-11-15 12:47:52.080629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.900 [2024-11-15 12:47:52.080640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.900 [2024-11-15 12:47:52.092903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.900 [2024-11-15 12:47:52.093302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.900 [2024-11-15 12:47:52.093329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.900 [2024-11-15 12:47:52.093344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.900 [2024-11-15 12:47:52.093558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.900 [2024-11-15 12:47:52.093759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.900 [2024-11-15 12:47:52.093796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.900 [2024-11-15 12:47:52.093813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.900 [2024-11-15 12:47:52.093825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.900 [2024-11-15 12:47:52.106133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.900 [2024-11-15 12:47:52.106562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.900 [2024-11-15 12:47:52.106605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.900 [2024-11-15 12:47:52.106622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.900 [2024-11-15 12:47:52.106870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.900 [2024-11-15 12:47:52.107082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.900 [2024-11-15 12:47:52.107101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.900 [2024-11-15 12:47:52.107113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.900 [2024-11-15 12:47:52.107124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.900 [2024-11-15 12:47:52.119488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.900 [2024-11-15 12:47:52.119851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.900 [2024-11-15 12:47:52.119880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.900 [2024-11-15 12:47:52.119896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.900 [2024-11-15 12:47:52.120130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.900 [2024-11-15 12:47:52.120344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.900 [2024-11-15 12:47:52.120363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.900 [2024-11-15 12:47:52.120375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.900 [2024-11-15 12:47:52.120386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.900 [2024-11-15 12:47:52.132931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.900 [2024-11-15 12:47:52.133335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.900 [2024-11-15 12:47:52.133361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.900 [2024-11-15 12:47:52.133376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.900 [2024-11-15 12:47:52.133611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.900 [2024-11-15 12:47:52.133855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.900 [2024-11-15 12:47:52.133875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.900 [2024-11-15 12:47:52.133888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.900 [2024-11-15 12:47:52.133900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.900 [2024-11-15 12:47:52.146296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.900 [2024-11-15 12:47:52.146694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.900 [2024-11-15 12:47:52.146730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.900 [2024-11-15 12:47:52.146748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.900 [2024-11-15 12:47:52.146975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.900 [2024-11-15 12:47:52.147227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.900 [2024-11-15 12:47:52.147247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.900 [2024-11-15 12:47:52.147260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.900 [2024-11-15 12:47:52.147272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.900 [2024-11-15 12:47:52.159984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.900 [2024-11-15 12:47:52.160414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.900 [2024-11-15 12:47:52.160442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.900 [2024-11-15 12:47:52.160458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.900 [2024-11-15 12:47:52.160687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.900 [2024-11-15 12:47:52.160928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.900 [2024-11-15 12:47:52.160949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.900 [2024-11-15 12:47:52.160963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.900 [2024-11-15 12:47:52.160975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.900 [2024-11-15 12:47:52.173514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.900 [2024-11-15 12:47:52.173823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.900 [2024-11-15 12:47:52.173852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.900 [2024-11-15 12:47:52.173867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.900 [2024-11-15 12:47:52.174107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.900 [2024-11-15 12:47:52.174300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.900 [2024-11-15 12:47:52.174318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.900 [2024-11-15 12:47:52.174330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.900 [2024-11-15 12:47:52.174341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.900 [2024-11-15 12:47:52.186890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.900 [2024-11-15 12:47:52.187273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.900 [2024-11-15 12:47:52.187301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.900 [2024-11-15 12:47:52.187323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.900 [2024-11-15 12:47:52.187567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.900 [2024-11-15 12:47:52.187816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.900 [2024-11-15 12:47:52.187838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.900 [2024-11-15 12:47:52.187852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.900 [2024-11-15 12:47:52.187864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.900 [2024-11-15 12:47:52.200375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.900 [2024-11-15 12:47:52.200805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.900 [2024-11-15 12:47:52.200833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.901 [2024-11-15 12:47:52.200849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.901 [2024-11-15 12:47:52.201076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.901 [2024-11-15 12:47:52.201284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.901 [2024-11-15 12:47:52.201302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.901 [2024-11-15 12:47:52.201314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.901 [2024-11-15 12:47:52.201325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.901 [2024-11-15 12:47:52.213769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.901 [2024-11-15 12:47:52.214245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.901 [2024-11-15 12:47:52.214287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.901 [2024-11-15 12:47:52.214303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.901 [2024-11-15 12:47:52.214554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.901 [2024-11-15 12:47:52.214797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.901 [2024-11-15 12:47:52.214819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.901 [2024-11-15 12:47:52.214832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.901 [2024-11-15 12:47:52.214844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.901 [2024-11-15 12:47:52.227122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.901 [2024-11-15 12:47:52.227506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.901 [2024-11-15 12:47:52.227533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:11.901 [2024-11-15 12:47:52.227548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:11.901 [2024-11-15 12:47:52.227796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:11.901 [2024-11-15 12:47:52.228021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.901 [2024-11-15 12:47:52.228039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.901 [2024-11-15 12:47:52.228051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.901 [2024-11-15 12:47:52.228062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:11.901 [2024-11-15 12:47:52.240561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.160 [2024-11-15 12:47:52.241057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.160 [2024-11-15 12:47:52.241085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.160 [2024-11-15 12:47:52.241100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.160 [2024-11-15 12:47:52.241313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.160 [2024-11-15 12:47:52.241533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.160 [2024-11-15 12:47:52.241552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.160 [2024-11-15 12:47:52.241564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.160 [2024-11-15 12:47:52.241590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.160 [2024-11-15 12:47:52.253692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.160 [2024-11-15 12:47:52.254156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.160 [2024-11-15 12:47:52.254185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.160 [2024-11-15 12:47:52.254201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.160 [2024-11-15 12:47:52.254468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.160 [2024-11-15 12:47:52.254661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.160 [2024-11-15 12:47:52.254679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.160 [2024-11-15 12:47:52.254690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.160 [2024-11-15 12:47:52.254715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.160 [2024-11-15 12:47:52.266912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.160 [2024-11-15 12:47:52.267325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.160 [2024-11-15 12:47:52.267377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.160 [2024-11-15 12:47:52.267391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.160 [2024-11-15 12:47:52.267634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.160 [2024-11-15 12:47:52.267875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.160 [2024-11-15 12:47:52.267896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.160 [2024-11-15 12:47:52.267913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.160 [2024-11-15 12:47:52.267926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.160 [2024-11-15 12:47:52.280249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.160 [2024-11-15 12:47:52.280621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.160 [2024-11-15 12:47:52.280710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.160 [2024-11-15 12:47:52.280735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.160 [2024-11-15 12:47:52.280990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.160 [2024-11-15 12:47:52.281214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.160 [2024-11-15 12:47:52.281232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.160 [2024-11-15 12:47:52.281244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.160 [2024-11-15 12:47:52.281255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.160 [2024-11-15 12:47:52.293297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.160 [2024-11-15 12:47:52.293660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.161 [2024-11-15 12:47:52.293703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.161 [2024-11-15 12:47:52.293729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.161 [2024-11-15 12:47:52.293999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.161 [2024-11-15 12:47:52.294207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.161 [2024-11-15 12:47:52.294225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.161 [2024-11-15 12:47:52.294237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.161 [2024-11-15 12:47:52.294248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.161 [2024-11-15 12:47:52.306369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.161 [2024-11-15 12:47:52.306804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.161 [2024-11-15 12:47:52.306846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.161 [2024-11-15 12:47:52.306861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.161 [2024-11-15 12:47:52.307111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.161 [2024-11-15 12:47:52.307319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.161 [2024-11-15 12:47:52.307337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.161 [2024-11-15 12:47:52.307349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.161 [2024-11-15 12:47:52.307360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.161 [2024-11-15 12:47:52.319548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.161 [2024-11-15 12:47:52.319922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.161 [2024-11-15 12:47:52.319966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.161 [2024-11-15 12:47:52.319981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.161 [2024-11-15 12:47:52.320249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.161 [2024-11-15 12:47:52.320441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.161 [2024-11-15 12:47:52.320459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.161 [2024-11-15 12:47:52.320471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.161 [2024-11-15 12:47:52.320482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.161 [2024-11-15 12:47:52.332642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.161 [2024-11-15 12:47:52.333013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.161 [2024-11-15 12:47:52.333040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.161 [2024-11-15 12:47:52.333056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.161 [2024-11-15 12:47:52.333292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.161 [2024-11-15 12:47:52.333499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.161 [2024-11-15 12:47:52.333517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.161 [2024-11-15 12:47:52.333529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.161 [2024-11-15 12:47:52.333541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.161 [2024-11-15 12:47:52.345779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.161 [2024-11-15 12:47:52.346189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.161 [2024-11-15 12:47:52.346232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.161 [2024-11-15 12:47:52.346247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.161 [2024-11-15 12:47:52.346500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.161 [2024-11-15 12:47:52.346707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.161 [2024-11-15 12:47:52.346749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.161 [2024-11-15 12:47:52.346762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.161 [2024-11-15 12:47:52.346774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.161 [2024-11-15 12:47:52.358838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.161 [2024-11-15 12:47:52.359162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.161 [2024-11-15 12:47:52.359190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.161 [2024-11-15 12:47:52.359210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.161 [2024-11-15 12:47:52.359431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.161 [2024-11-15 12:47:52.359640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.161 [2024-11-15 12:47:52.359658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.161 [2024-11-15 12:47:52.359670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.161 [2024-11-15 12:47:52.359696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.161 [2024-11-15 12:47:52.371908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.161 [2024-11-15 12:47:52.372282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.161 [2024-11-15 12:47:52.372323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.161 [2024-11-15 12:47:52.372339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.161 [2024-11-15 12:47:52.372559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.161 [2024-11-15 12:47:52.372794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.161 [2024-11-15 12:47:52.372813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.161 [2024-11-15 12:47:52.372826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.161 [2024-11-15 12:47:52.372837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.161 [2024-11-15 12:47:52.384871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.161 [2024-11-15 12:47:52.385181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.161 [2024-11-15 12:47:52.385222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.161 [2024-11-15 12:47:52.385237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.161 [2024-11-15 12:47:52.385451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.161 [2024-11-15 12:47:52.385660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.161 [2024-11-15 12:47:52.385678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.161 [2024-11-15 12:47:52.385690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.161 [2024-11-15 12:47:52.385701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.161 [2024-11-15 12:47:52.398020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.161 [2024-11-15 12:47:52.398443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.161 [2024-11-15 12:47:52.398471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.161 [2024-11-15 12:47:52.398486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.161 [2024-11-15 12:47:52.398716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.161 [2024-11-15 12:47:52.398963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.161 [2024-11-15 12:47:52.398984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.161 [2024-11-15 12:47:52.398997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.161 [2024-11-15 12:47:52.399010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.161 [2024-11-15 12:47:52.411158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.161 [2024-11-15 12:47:52.411596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.161 [2024-11-15 12:47:52.411638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.161 [2024-11-15 12:47:52.411654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.161 [2024-11-15 12:47:52.411904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.161 [2024-11-15 12:47:52.412115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.161 [2024-11-15 12:47:52.412133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.161 [2024-11-15 12:47:52.412145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.161 [2024-11-15 12:47:52.412155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.161 [2024-11-15 12:47:52.424361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.161 [2024-11-15 12:47:52.424729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.162 [2024-11-15 12:47:52.424756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.162 [2024-11-15 12:47:52.424771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.162 [2024-11-15 12:47:52.425005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.162 [2024-11-15 12:47:52.425197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.162 [2024-11-15 12:47:52.425216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.162 [2024-11-15 12:47:52.425227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.162 [2024-11-15 12:47:52.425238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.162 [2024-11-15 12:47:52.437357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.162 [2024-11-15 12:47:52.437844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.162 [2024-11-15 12:47:52.437885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.162 [2024-11-15 12:47:52.437901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.162 [2024-11-15 12:47:52.438144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.162 [2024-11-15 12:47:52.438336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.162 [2024-11-15 12:47:52.438354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.162 [2024-11-15 12:47:52.438370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.162 [2024-11-15 12:47:52.438382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.162 [2024-11-15 12:47:52.450509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.162 [2024-11-15 12:47:52.450916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.162 [2024-11-15 12:47:52.450943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.162 [2024-11-15 12:47:52.450958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.162 [2024-11-15 12:47:52.451179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.162 [2024-11-15 12:47:52.451388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.162 [2024-11-15 12:47:52.451406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.162 [2024-11-15 12:47:52.451418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.162 [2024-11-15 12:47:52.451429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.162 [2024-11-15 12:47:52.463479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.162 [2024-11-15 12:47:52.463848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.162 [2024-11-15 12:47:52.463891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.162 [2024-11-15 12:47:52.463906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.162 [2024-11-15 12:47:52.464160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.162 [2024-11-15 12:47:52.464367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.162 [2024-11-15 12:47:52.464385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.162 [2024-11-15 12:47:52.464397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.162 [2024-11-15 12:47:52.464408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.162 [2024-11-15 12:47:52.476532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.162 [2024-11-15 12:47:52.476907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.162 [2024-11-15 12:47:52.476935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.162 [2024-11-15 12:47:52.476951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.162 [2024-11-15 12:47:52.477192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.162 [2024-11-15 12:47:52.477390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.162 [2024-11-15 12:47:52.477409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.162 [2024-11-15 12:47:52.477421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.162 [2024-11-15 12:47:52.477432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.162 [2024-11-15 12:47:52.489542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.162 [2024-11-15 12:47:52.489970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.162 [2024-11-15 12:47:52.490011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.162 [2024-11-15 12:47:52.490027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.162 [2024-11-15 12:47:52.490267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.162 [2024-11-15 12:47:52.490474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.162 [2024-11-15 12:47:52.490492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.162 [2024-11-15 12:47:52.490504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.162 [2024-11-15 12:47:52.490515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.422 [2024-11-15 12:47:52.502983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.422 [2024-11-15 12:47:52.503391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.422 [2024-11-15 12:47:52.503432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.422 [2024-11-15 12:47:52.503448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.422 [2024-11-15 12:47:52.503695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.422 [2024-11-15 12:47:52.503915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.422 [2024-11-15 12:47:52.503935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.422 [2024-11-15 12:47:52.503947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.422 [2024-11-15 12:47:52.503959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.422 [2024-11-15 12:47:52.516025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.422 [2024-11-15 12:47:52.516521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.422 [2024-11-15 12:47:52.516562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.422 [2024-11-15 12:47:52.516579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.422 [2024-11-15 12:47:52.516826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.422 [2024-11-15 12:47:52.517062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.422 [2024-11-15 12:47:52.517080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.422 [2024-11-15 12:47:52.517092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.422 [2024-11-15 12:47:52.517103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.422 4493.00 IOPS, 17.55 MiB/s [2024-11-15T11:47:52.766Z] [2024-11-15 12:47:52.530344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.422 [2024-11-15 12:47:52.530814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.422 [2024-11-15 12:47:52.530842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.422 [2024-11-15 12:47:52.530880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.422 [2024-11-15 12:47:52.531135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.422 [2024-11-15 12:47:52.531328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.422 [2024-11-15 12:47:52.531346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.422 [2024-11-15 12:47:52.531358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.422 [2024-11-15 12:47:52.531369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
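The figure interleaved above, 4493.00 IOPS at 17.55 MiB/s, is the periodic throughput sample from the I/O load that keeps running while the controller resets fail. The two numbers are consistent with a 4 KiB I/O size (an inference from the numbers themselves, not something stated in this part of the log):

    4493 IOPS x 4096 B = 18,403,328 B/s ≈ 17.55 MiB/s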
00:26:12.422 [2024-11-15 12:47:52.543481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.422 [2024-11-15 12:47:52.543822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.422 [2024-11-15 12:47:52.543890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.422 [2024-11-15 12:47:52.543905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.422 [2024-11-15 12:47:52.544138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.422 [2024-11-15 12:47:52.544345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.422 [2024-11-15 12:47:52.544364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.422 [2024-11-15 12:47:52.544376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.422 [2024-11-15 12:47:52.544386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.422 [2024-11-15 12:47:52.556515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.422 [2024-11-15 12:47:52.556884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.422 [2024-11-15 12:47:52.556927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.422 [2024-11-15 12:47:52.556943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.422 [2024-11-15 12:47:52.557194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.422 [2024-11-15 12:47:52.557401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.422 [2024-11-15 12:47:52.557419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.422 [2024-11-15 12:47:52.557431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.422 [2024-11-15 12:47:52.557442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.422 [2024-11-15 12:47:52.569564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.422 [2024-11-15 12:47:52.569880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.422 [2024-11-15 12:47:52.569906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.422 [2024-11-15 12:47:52.569921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.422 [2024-11-15 12:47:52.570114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.422 [2024-11-15 12:47:52.570326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.422 [2024-11-15 12:47:52.570344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.422 [2024-11-15 12:47:52.570356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.422 [2024-11-15 12:47:52.570367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.422 [2024-11-15 12:47:52.582674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.422 [2024-11-15 12:47:52.583043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.422 [2024-11-15 12:47:52.583086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.422 [2024-11-15 12:47:52.583101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.422 [2024-11-15 12:47:52.583353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.422 [2024-11-15 12:47:52.583545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.422 [2024-11-15 12:47:52.583563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.422 [2024-11-15 12:47:52.583575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.422 [2024-11-15 12:47:52.583586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.422 [2024-11-15 12:47:52.595679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.422 [2024-11-15 12:47:52.596107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.422 [2024-11-15 12:47:52.596149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.422 [2024-11-15 12:47:52.596165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.422 [2024-11-15 12:47:52.596405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.422 [2024-11-15 12:47:52.596613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.423 [2024-11-15 12:47:52.596631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.423 [2024-11-15 12:47:52.596643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.423 [2024-11-15 12:47:52.596654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.423 [2024-11-15 12:47:52.608780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.423 [2024-11-15 12:47:52.609110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.423 [2024-11-15 12:47:52.609137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.423 [2024-11-15 12:47:52.609152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.423 [2024-11-15 12:47:52.609372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.423 [2024-11-15 12:47:52.609581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.423 [2024-11-15 12:47:52.609600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.423 [2024-11-15 12:47:52.609616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.423 [2024-11-15 12:47:52.609628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.423 [2024-11-15 12:47:52.621760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.423 [2024-11-15 12:47:52.622143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.423 [2024-11-15 12:47:52.622184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.423 [2024-11-15 12:47:52.622198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.423 [2024-11-15 12:47:52.622426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.423 [2024-11-15 12:47:52.622634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.423 [2024-11-15 12:47:52.622653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.423 [2024-11-15 12:47:52.622665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.423 [2024-11-15 12:47:52.622676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.423 [2024-11-15 12:47:52.634810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.423 [2024-11-15 12:47:52.635171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.423 [2024-11-15 12:47:52.635212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.423 [2024-11-15 12:47:52.635228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.423 [2024-11-15 12:47:52.635476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.423 [2024-11-15 12:47:52.635683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.423 [2024-11-15 12:47:52.635701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.423 [2024-11-15 12:47:52.635713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.423 [2024-11-15 12:47:52.635748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.423 [2024-11-15 12:47:52.647890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.423 [2024-11-15 12:47:52.648278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.423 [2024-11-15 12:47:52.648305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.423 [2024-11-15 12:47:52.648320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.423 [2024-11-15 12:47:52.648563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.423 [2024-11-15 12:47:52.648842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.423 [2024-11-15 12:47:52.648863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.423 [2024-11-15 12:47:52.648876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.423 [2024-11-15 12:47:52.648888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.423 [2024-11-15 12:47:52.661159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.423 [2024-11-15 12:47:52.661489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.423 [2024-11-15 12:47:52.661566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.423 [2024-11-15 12:47:52.661582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.423 [2024-11-15 12:47:52.661858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.423 [2024-11-15 12:47:52.662083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.423 [2024-11-15 12:47:52.662116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.423 [2024-11-15 12:47:52.662128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.423 [2024-11-15 12:47:52.662140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.423 [2024-11-15 12:47:52.674370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.423 [2024-11-15 12:47:52.674831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.423 [2024-11-15 12:47:52.674859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.423 [2024-11-15 12:47:52.674875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.423 [2024-11-15 12:47:52.675101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.423 [2024-11-15 12:47:52.675308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.423 [2024-11-15 12:47:52.675327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.423 [2024-11-15 12:47:52.675339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.423 [2024-11-15 12:47:52.675350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.423 [2024-11-15 12:47:52.687576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.423 [2024-11-15 12:47:52.687984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.423 [2024-11-15 12:47:52.688024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.423 [2024-11-15 12:47:52.688041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.423 [2024-11-15 12:47:52.688275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.423 [2024-11-15 12:47:52.688468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.423 [2024-11-15 12:47:52.688485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.423 [2024-11-15 12:47:52.688497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.423 [2024-11-15 12:47:52.688509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.423 [2024-11-15 12:47:52.700641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.423 [2024-11-15 12:47:52.700977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.423 [2024-11-15 12:47:52.701004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.423 [2024-11-15 12:47:52.701024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.423 [2024-11-15 12:47:52.701246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.423 [2024-11-15 12:47:52.701454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.423 [2024-11-15 12:47:52.701472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.423 [2024-11-15 12:47:52.701483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.423 [2024-11-15 12:47:52.701494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.423 [2024-11-15 12:47:52.713760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.423 [2024-11-15 12:47:52.714123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.423 [2024-11-15 12:47:52.714166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.423 [2024-11-15 12:47:52.714181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.423 [2024-11-15 12:47:52.714433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.423 [2024-11-15 12:47:52.714640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.423 [2024-11-15 12:47:52.714658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.423 [2024-11-15 12:47:52.714669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.423 [2024-11-15 12:47:52.714680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.423 [2024-11-15 12:47:52.726866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.423 [2024-11-15 12:47:52.727237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.423 [2024-11-15 12:47:52.727280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.423 [2024-11-15 12:47:52.727295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.423 [2024-11-15 12:47:52.727541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.424 [2024-11-15 12:47:52.727758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.424 [2024-11-15 12:47:52.727778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.424 [2024-11-15 12:47:52.727790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.424 [2024-11-15 12:47:52.727801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.424 [2024-11-15 12:47:52.739989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.424 [2024-11-15 12:47:52.740348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.424 [2024-11-15 12:47:52.740389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.424 [2024-11-15 12:47:52.740404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.424 [2024-11-15 12:47:52.740651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.424 [2024-11-15 12:47:52.740895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.424 [2024-11-15 12:47:52.740915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.424 [2024-11-15 12:47:52.740927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.424 [2024-11-15 12:47:52.740939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.424 [2024-11-15 12:47:52.753081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.424 [2024-11-15 12:47:52.753470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.424 [2024-11-15 12:47:52.753497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.424 [2024-11-15 12:47:52.753512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.424 [2024-11-15 12:47:52.753744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.424 [2024-11-15 12:47:52.753958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.424 [2024-11-15 12:47:52.753977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.424 [2024-11-15 12:47:52.753989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.424 [2024-11-15 12:47:52.754001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.682 [2024-11-15 12:47:52.766523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.682 [2024-11-15 12:47:52.766900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.682 [2024-11-15 12:47:52.766989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.682 [2024-11-15 12:47:52.767005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.682 [2024-11-15 12:47:52.767242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.682 [2024-11-15 12:47:52.767450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.682 [2024-11-15 12:47:52.767468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.682 [2024-11-15 12:47:52.767480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.682 [2024-11-15 12:47:52.767490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.683 [2024-11-15 12:47:52.779687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.683 [2024-11-15 12:47:52.780067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.683 [2024-11-15 12:47:52.780096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.683 [2024-11-15 12:47:52.780112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.683 [2024-11-15 12:47:52.780353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.683 [2024-11-15 12:47:52.780561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.683 [2024-11-15 12:47:52.780579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.683 [2024-11-15 12:47:52.780596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.683 [2024-11-15 12:47:52.780607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.683 [2024-11-15 12:47:52.792982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.683 [2024-11-15 12:47:52.793457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.683 [2024-11-15 12:47:52.793510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.683 [2024-11-15 12:47:52.793524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.683 [2024-11-15 12:47:52.793797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.683 [2024-11-15 12:47:52.794015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.683 [2024-11-15 12:47:52.794036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.683 [2024-11-15 12:47:52.794050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.683 [2024-11-15 12:47:52.794062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.683 [2024-11-15 12:47:52.806249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.683 [2024-11-15 12:47:52.806548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.683 [2024-11-15 12:47:52.806589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.683 [2024-11-15 12:47:52.806603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.683 [2024-11-15 12:47:52.806837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.683 [2024-11-15 12:47:52.807079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.683 [2024-11-15 12:47:52.807097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.683 [2024-11-15 12:47:52.807109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.683 [2024-11-15 12:47:52.807120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.683 [2024-11-15 12:47:52.819366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.683 [2024-11-15 12:47:52.819790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.683 [2024-11-15 12:47:52.819818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.683 [2024-11-15 12:47:52.819834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.683 [2024-11-15 12:47:52.820074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.683 [2024-11-15 12:47:52.820282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.683 [2024-11-15 12:47:52.820300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.683 [2024-11-15 12:47:52.820312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.683 [2024-11-15 12:47:52.820323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.683 [2024-11-15 12:47:52.832331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.683 [2024-11-15 12:47:52.832674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.683 [2024-11-15 12:47:52.832702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.683 [2024-11-15 12:47:52.832726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.683 [2024-11-15 12:47:52.832986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.683 [2024-11-15 12:47:52.833195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.683 [2024-11-15 12:47:52.833213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.683 [2024-11-15 12:47:52.833225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.683 [2024-11-15 12:47:52.833236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.683 [2024-11-15 12:47:52.845429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.683 [2024-11-15 12:47:52.845791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.683 [2024-11-15 12:47:52.845835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.683 [2024-11-15 12:47:52.845851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.683 [2024-11-15 12:47:52.846103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.683 [2024-11-15 12:47:52.846309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.683 [2024-11-15 12:47:52.846327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.683 [2024-11-15 12:47:52.846338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.683 [2024-11-15 12:47:52.846349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.683 [2024-11-15 12:47:52.858685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.683 [2024-11-15 12:47:52.859052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.683 [2024-11-15 12:47:52.859078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.683 [2024-11-15 12:47:52.859094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.683 [2024-11-15 12:47:52.859328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.683 [2024-11-15 12:47:52.859536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.683 [2024-11-15 12:47:52.859554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.683 [2024-11-15 12:47:52.859567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.683 [2024-11-15 12:47:52.859578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.683 [2024-11-15 12:47:52.871903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.683 [2024-11-15 12:47:52.872329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.683 [2024-11-15 12:47:52.872356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.683 [2024-11-15 12:47:52.872392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.683 [2024-11-15 12:47:52.872631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.683 [2024-11-15 12:47:52.872886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.683 [2024-11-15 12:47:52.872906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.683 [2024-11-15 12:47:52.872919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.683 [2024-11-15 12:47:52.872931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.683 [2024-11-15 12:47:52.884941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.683 [2024-11-15 12:47:52.885270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.683 [2024-11-15 12:47:52.885297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.683 [2024-11-15 12:47:52.885312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.683 [2024-11-15 12:47:52.885532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.683 [2024-11-15 12:47:52.885780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.683 [2024-11-15 12:47:52.885800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.683 [2024-11-15 12:47:52.885813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.683 [2024-11-15 12:47:52.885825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.683 [2024-11-15 12:47:52.898149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.683 [2024-11-15 12:47:52.898510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.683 [2024-11-15 12:47:52.898552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.683 [2024-11-15 12:47:52.898567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.683 [2024-11-15 12:47:52.898846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.683 [2024-11-15 12:47:52.899066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.683 [2024-11-15 12:47:52.899100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.683 [2024-11-15 12:47:52.899113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.684 [2024-11-15 12:47:52.899124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.684 [2024-11-15 12:47:52.911473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.684 [2024-11-15 12:47:52.911920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.684 [2024-11-15 12:47:52.911971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.684 [2024-11-15 12:47:52.911986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.684 [2024-11-15 12:47:52.912228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.684 [2024-11-15 12:47:52.912425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.684 [2024-11-15 12:47:52.912443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.684 [2024-11-15 12:47:52.912455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.684 [2024-11-15 12:47:52.912465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.684 [2024-11-15 12:47:52.924842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.684 [2024-11-15 12:47:52.925251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.684 [2024-11-15 12:47:52.925292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.684 [2024-11-15 12:47:52.925308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.684 [2024-11-15 12:47:52.925548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.684 [2024-11-15 12:47:52.925793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.684 [2024-11-15 12:47:52.925820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.684 [2024-11-15 12:47:52.925833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.684 [2024-11-15 12:47:52.925845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.684 [2024-11-15 12:47:52.937899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.684 [2024-11-15 12:47:52.938326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.684 [2024-11-15 12:47:52.938353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.684 [2024-11-15 12:47:52.938384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.684 [2024-11-15 12:47:52.938622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.684 [2024-11-15 12:47:52.938843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.684 [2024-11-15 12:47:52.938862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.684 [2024-11-15 12:47:52.938874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.684 [2024-11-15 12:47:52.938886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.684 [2024-11-15 12:47:52.951030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.684 [2024-11-15 12:47:52.951443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.684 [2024-11-15 12:47:52.951483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.684 [2024-11-15 12:47:52.951499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.684 [2024-11-15 12:47:52.951761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.684 [2024-11-15 12:47:52.951973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.684 [2024-11-15 12:47:52.951992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.684 [2024-11-15 12:47:52.952009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.684 [2024-11-15 12:47:52.952020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.684 [2024-11-15 12:47:52.964119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.684 [2024-11-15 12:47:52.964478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.684 [2024-11-15 12:47:52.964520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.684 [2024-11-15 12:47:52.964535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.684 [2024-11-15 12:47:52.964775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.684 [2024-11-15 12:47:52.964974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.684 [2024-11-15 12:47:52.964993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.684 [2024-11-15 12:47:52.965005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.684 [2024-11-15 12:47:52.965016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.684 [2024-11-15 12:47:52.977272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.684 [2024-11-15 12:47:52.977634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.684 [2024-11-15 12:47:52.977676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.684 [2024-11-15 12:47:52.977692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.684 [2024-11-15 12:47:52.977954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.684 [2024-11-15 12:47:52.978164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.684 [2024-11-15 12:47:52.978182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.684 [2024-11-15 12:47:52.978194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.684 [2024-11-15 12:47:52.978205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.684 [2024-11-15 12:47:52.990289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.684 [2024-11-15 12:47:52.990712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.684 [2024-11-15 12:47:52.990745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.684 [2024-11-15 12:47:52.990776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.684 [2024-11-15 12:47:52.991015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.684 [2024-11-15 12:47:52.991222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.684 [2024-11-15 12:47:52.991240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.684 [2024-11-15 12:47:52.991252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.684 [2024-11-15 12:47:52.991263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.684 [2024-11-15 12:47:53.003397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.684 [2024-11-15 12:47:53.003734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.684 [2024-11-15 12:47:53.003761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.684 [2024-11-15 12:47:53.003777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.684 [2024-11-15 12:47:53.003998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.684 [2024-11-15 12:47:53.004206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.684 [2024-11-15 12:47:53.004224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.684 [2024-11-15 12:47:53.004236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.684 [2024-11-15 12:47:53.004247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.684 [2024-11-15 12:47:53.016552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.684 [2024-11-15 12:47:53.017051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.684 [2024-11-15 12:47:53.017093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.684 [2024-11-15 12:47:53.017110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.684 [2024-11-15 12:47:53.017359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.684 [2024-11-15 12:47:53.017566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.684 [2024-11-15 12:47:53.017584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.684 [2024-11-15 12:47:53.017596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.684 [2024-11-15 12:47:53.017606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.944 [2024-11-15 12:47:53.029936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.944 [2024-11-15 12:47:53.030277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-11-15 12:47:53.030305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.944 [2024-11-15 12:47:53.030321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.944 [2024-11-15 12:47:53.030562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.944 [2024-11-15 12:47:53.030814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.944 [2024-11-15 12:47:53.030835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.944 [2024-11-15 12:47:53.030847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.944 [2024-11-15 12:47:53.030859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.944 [2024-11-15 12:47:53.043093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.944 [2024-11-15 12:47:53.043579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-11-15 12:47:53.043605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.944 [2024-11-15 12:47:53.043641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.944 [2024-11-15 12:47:53.043882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.944 [2024-11-15 12:47:53.044093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.944 [2024-11-15 12:47:53.044112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.944 [2024-11-15 12:47:53.044124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.944 [2024-11-15 12:47:53.044135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.944 [2024-11-15 12:47:53.056204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.944 [2024-11-15 12:47:53.056680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-11-15 12:47:53.056735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.944 [2024-11-15 12:47:53.056752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.944 [2024-11-15 12:47:53.057012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.944 [2024-11-15 12:47:53.057204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.944 [2024-11-15 12:47:53.057222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.944 [2024-11-15 12:47:53.057234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.944 [2024-11-15 12:47:53.057245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.944 [2024-11-15 12:47:53.069252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.944 [2024-11-15 12:47:53.069613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-11-15 12:47:53.069656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.944 [2024-11-15 12:47:53.069672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.944 [2024-11-15 12:47:53.069935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.944 [2024-11-15 12:47:53.070145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.944 [2024-11-15 12:47:53.070164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.944 [2024-11-15 12:47:53.070175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.944 [2024-11-15 12:47:53.070186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.944 [2024-11-15 12:47:53.082339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.944 [2024-11-15 12:47:53.082755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-11-15 12:47:53.082804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.944 [2024-11-15 12:47:53.082819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.944 [2024-11-15 12:47:53.083069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.944 [2024-11-15 12:47:53.083265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.944 [2024-11-15 12:47:53.083283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.944 [2024-11-15 12:47:53.083295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.944 [2024-11-15 12:47:53.083306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.944 [2024-11-15 12:47:53.095353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.944 [2024-11-15 12:47:53.095716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-11-15 12:47:53.095751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.944 [2024-11-15 12:47:53.095766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.944 [2024-11-15 12:47:53.096006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.944 [2024-11-15 12:47:53.096214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.944 [2024-11-15 12:47:53.096233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.944 [2024-11-15 12:47:53.096245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.944 [2024-11-15 12:47:53.096256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.944 [2024-11-15 12:47:53.108433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.944 [2024-11-15 12:47:53.108793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.944 [2024-11-15 12:47:53.108840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.944 [2024-11-15 12:47:53.108861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.944 [2024-11-15 12:47:53.109126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.944 [2024-11-15 12:47:53.109317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.944 [2024-11-15 12:47:53.109335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.944 [2024-11-15 12:47:53.109347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.944 [2024-11-15 12:47:53.109358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.945 [2024-11-15 12:47:53.121490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.945 [2024-11-15 12:47:53.121873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.945 [2024-11-15 12:47:53.121913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420
00:26:12.945 [2024-11-15 12:47:53.121929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set
00:26:12.945 [2024-11-15 12:47:53.122149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor
00:26:12.945 [2024-11-15 12:47:53.122359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.945 [2024-11-15 12:47:53.122377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.945 [2024-11-15 12:47:53.122394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.945 [2024-11-15 12:47:53.122406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1131251 Killed "${NVMF_APP[@]}" "$@"
00:26:12.945 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:12.945 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:12.945 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:12.945 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:12.945 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:12.945 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1132205
00:26:12.945 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:12.945 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1132205
00:26:12.945 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1132205 ']'
00:26:12.945 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:12.945 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:12.945 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:12.945 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:12.945 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:12.945 [2024-11-15 12:47:53.135135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.945 [2024-11-15 12:47:53.135518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-11-15 12:47:53.135546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.945 [2024-11-15 12:47:53.135561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.945 [2024-11-15 12:47:53.135796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.945 [2024-11-15 12:47:53.136029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.945 [2024-11-15 12:47:53.136048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.945 [2024-11-15 12:47:53.136061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.945 [2024-11-15 12:47:53.136072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.945 [2024-11-15 12:47:53.148596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.945 [2024-11-15 12:47:53.148985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-11-15 12:47:53.149014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.945 [2024-11-15 12:47:53.149030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.945 [2024-11-15 12:47:53.149258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.945 [2024-11-15 12:47:53.149491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.945 [2024-11-15 12:47:53.149518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.945 [2024-11-15 12:47:53.149531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.945 [2024-11-15 12:47:53.149543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
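At this point the shell trace shows the harness recovering: the old target (pid 1131251) has been killed, tgt_init/nvmfappstart relaunch nvmf_tgt inside the cvl_0_0_ns_spdk namespace as pid 1132205, and waitforlisten polls for the RPC socket at /var/tmp/spdk.sock. Once that socket is up, the listener the initiator keeps retrying against has to be recreated over RPC. The exact configuration calls are wrapped by helpers in nvmf/common.sh and are not visible in this excerpt; the sequence below is only an illustrative sketch using the stock rpc.py commands, with a hypothetical Malloc0 bdev standing in for whatever backing the real test uses.

# Illustrative sketch only - bdev name, sizes and serial number are made up;
# the address, port and subsystem NQN are the ones this log keeps retrying.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t TCP
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420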
00:26:12.945 [2024-11-15 12:47:53.162064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.945 [2024-11-15 12:47:53.162530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-11-15 12:47:53.162558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.945 [2024-11-15 12:47:53.162573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.945 [2024-11-15 12:47:53.162810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.945 [2024-11-15 12:47:53.163044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.945 [2024-11-15 12:47:53.163063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.945 [2024-11-15 12:47:53.163075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.945 [2024-11-15 12:47:53.163087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.945 [2024-11-15 12:47:53.175578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.945 [2024-11-15 12:47:53.175967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-11-15 12:47:53.175995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.945 [2024-11-15 12:47:53.176012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.945 [2024-11-15 12:47:53.176240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.945 [2024-11-15 12:47:53.176461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.945 [2024-11-15 12:47:53.176480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.945 [2024-11-15 12:47:53.176493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.945 [2024-11-15 12:47:53.176505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.945 [2024-11-15 12:47:53.181822] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:26:12.945 [2024-11-15 12:47:53.181886] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.945 [2024-11-15 12:47:53.189151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.945 [2024-11-15 12:47:53.189532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-11-15 12:47:53.189560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.945 [2024-11-15 12:47:53.189576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.945 [2024-11-15 12:47:53.189799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.945 [2024-11-15 12:47:53.190041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.945 [2024-11-15 12:47:53.190076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.945 [2024-11-15 12:47:53.190090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.945 [2024-11-15 12:47:53.190102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.945 [2024-11-15 12:47:53.202568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.945 [2024-11-15 12:47:53.202966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-11-15 12:47:53.202994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.945 [2024-11-15 12:47:53.203010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.945 [2024-11-15 12:47:53.203238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.945 [2024-11-15 12:47:53.203459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.945 [2024-11-15 12:47:53.203478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.945 [2024-11-15 12:47:53.203490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.945 [2024-11-15 12:47:53.203502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
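The DPDK EAL line above reflects the flags passed on the nvmf_tgt command line earlier: -m 0xE is the reactor core mask, -e 0xFFFF the tracepoint group mask reported a few lines later, and -i 0 the shared-memory instance id behind --file-prefix=spdk0. As a small illustrative helper (not something the suite runs), the mask can be decoded to see which cores the reactors will land on:

# Decode an SPDK/DPDK hex core mask; 0xE -> cores 1, 2 and 3, matching the
# "Total cores available: 3" and "Reactor started on core N" notices below.
mask=0xE
for core in $(seq 0 63); do
  (( (mask >> core) & 1 )) && echo "core $core enabled"
done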
00:26:12.945 [2024-11-15 12:47:53.216160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.945 [2024-11-15 12:47:53.216495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.945 [2024-11-15 12:47:53.216521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.945 [2024-11-15 12:47:53.216536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.945 [2024-11-15 12:47:53.216785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.945 [2024-11-15 12:47:53.216997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.945 [2024-11-15 12:47:53.217017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.945 [2024-11-15 12:47:53.217030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.945 [2024-11-15 12:47:53.217042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.945 [2024-11-15 12:47:53.229678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.945 [2024-11-15 12:47:53.230033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-11-15 12:47:53.230076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.946 [2024-11-15 12:47:53.230093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.946 [2024-11-15 12:47:53.230315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.946 [2024-11-15 12:47:53.230519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.946 [2024-11-15 12:47:53.230539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.946 [2024-11-15 12:47:53.230551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.946 [2024-11-15 12:47:53.230568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.946 [2024-11-15 12:47:53.243242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.946 [2024-11-15 12:47:53.243581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-11-15 12:47:53.243608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.946 [2024-11-15 12:47:53.243623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.946 [2024-11-15 12:47:53.243869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.946 [2024-11-15 12:47:53.244098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.946 [2024-11-15 12:47:53.244118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.946 [2024-11-15 12:47:53.244130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.946 [2024-11-15 12:47:53.244142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.946 [2024-11-15 12:47:53.256816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.946 [2024-11-15 12:47:53.257264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-11-15 12:47:53.257292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.946 [2024-11-15 12:47:53.257308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.946 [2024-11-15 12:47:53.257549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.946 [2024-11-15 12:47:53.257797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.946 [2024-11-15 12:47:53.257819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.946 [2024-11-15 12:47:53.257833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.946 [2024-11-15 12:47:53.257847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.946 [2024-11-15 12:47:53.260159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:12.946 [2024-11-15 12:47:53.270487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.946 [2024-11-15 12:47:53.271014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-11-15 12:47:53.271059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.946 [2024-11-15 12:47:53.271093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.946 [2024-11-15 12:47:53.271337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.946 [2024-11-15 12:47:53.271545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.946 [2024-11-15 12:47:53.271564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.946 [2024-11-15 12:47:53.271578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.946 [2024-11-15 12:47:53.271592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:12.946 [2024-11-15 12:47:53.284307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.946 [2024-11-15 12:47:53.284756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.946 [2024-11-15 12:47:53.284807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:12.946 [2024-11-15 12:47:53.284825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:12.946 [2024-11-15 12:47:53.285060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:12.946 [2024-11-15 12:47:53.285311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.946 [2024-11-15 12:47:53.285347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.946 [2024-11-15 12:47:53.285363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.946 [2024-11-15 12:47:53.285378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.206 [2024-11-15 12:47:53.297948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.207 [2024-11-15 12:47:53.298337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.207 [2024-11-15 12:47:53.298374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.207 [2024-11-15 12:47:53.298390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.207 [2024-11-15 12:47:53.298631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.207 [2024-11-15 12:47:53.298866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.207 [2024-11-15 12:47:53.298888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.207 [2024-11-15 12:47:53.298901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.207 [2024-11-15 12:47:53.298914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:13.207 [2024-11-15 12:47:53.311349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.207 [2024-11-15 12:47:53.311685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.207 [2024-11-15 12:47:53.311744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.207 [2024-11-15 12:47:53.311772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.207 [2024-11-15 12:47:53.311987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.207 [2024-11-15 12:47:53.312220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.207 [2024-11-15 12:47:53.312239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.207 [2024-11-15 12:47:53.312251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.207 [2024-11-15 12:47:53.312263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:13.207 [2024-11-15 12:47:53.323759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.207 [2024-11-15 12:47:53.323794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.207 [2024-11-15 12:47:53.323818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.207 [2024-11-15 12:47:53.323848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.207 [2024-11-15 12:47:53.323866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
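The app_setup_trace notices above give the two supported ways to look at the tracepoints enabled by -e 0xFFFF. A minimal way to act on them, assuming the spdk_trace tool built in this workspace, would be:

# Illustrative; the commands come straight from the notices above, and the binary
# path assumes the default in-tree build location.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0   # live snapshot from the running target
cp /dev/shm/nvmf_trace.0 /tmp/                                                        # keep the raw trace file for offline analysis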
00:26:13.207 [2024-11-15 12:47:53.324882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.207 [2024-11-15 12:47:53.325278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.207 [2024-11-15 12:47:53.325305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.207 [2024-11-15 12:47:53.325321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.207 [2024-11-15 12:47:53.325470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.207 [2024-11-15 12:47:53.325562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.207 [2024-11-15 12:47:53.325532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.207 [2024-11-15 12:47:53.325536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.207 [2024-11-15 12:47:53.325803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.207 [2024-11-15 12:47:53.325824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.207 [2024-11-15 12:47:53.325837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.207 [2024-11-15 12:47:53.325850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:13.207 [2024-11-15 12:47:53.338472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.207 [2024-11-15 12:47:53.338974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.207 [2024-11-15 12:47:53.339023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.207 [2024-11-15 12:47:53.339042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.207 [2024-11-15 12:47:53.339281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.207 [2024-11-15 12:47:53.339490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.207 [2024-11-15 12:47:53.339510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.207 [2024-11-15 12:47:53.339525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.207 [2024-11-15 12:47:53.339540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
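The three reactor notices confirm the 0xE mask: pollers now run on cores 1, 2 and 3 while the initiator keeps retrying in parallel, which is why the reactor and nvme_tcp messages interleave here. If needed, the reactor layout of the freshly started target can be dumped over the same RPC socket; this is an optional check, not something bdevperf.sh does at this point:

# Illustrative: list reactors/threads of the new nvmf_tgt (default RPC socket /var/tmp/spdk.sock).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_reactors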
00:26:13.207 [2024-11-15 12:47:53.352068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.207 [2024-11-15 12:47:53.352609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.207 [2024-11-15 12:47:53.352647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.207 [2024-11-15 12:47:53.352668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.207 [2024-11-15 12:47:53.352903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.207 [2024-11-15 12:47:53.353151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.207 [2024-11-15 12:47:53.353172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.207 [2024-11-15 12:47:53.353198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.207 [2024-11-15 12:47:53.353214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:13.207 [2024-11-15 12:47:53.365549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.207 [2024-11-15 12:47:53.366104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.207 [2024-11-15 12:47:53.366143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.207 [2024-11-15 12:47:53.366162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.207 [2024-11-15 12:47:53.366413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.207 [2024-11-15 12:47:53.366621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.207 [2024-11-15 12:47:53.366642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.207 [2024-11-15 12:47:53.366656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.207 [2024-11-15 12:47:53.366671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.207 [2024-11-15 12:47:53.379071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.207 [2024-11-15 12:47:53.379605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.207 [2024-11-15 12:47:53.379640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.207 [2024-11-15 12:47:53.379659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.207 [2024-11-15 12:47:53.379901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.207 [2024-11-15 12:47:53.380149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.207 [2024-11-15 12:47:53.380169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.207 [2024-11-15 12:47:53.380183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.207 [2024-11-15 12:47:53.380197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:13.207 [2024-11-15 12:47:53.392656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.207 [2024-11-15 12:47:53.393199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.207 [2024-11-15 12:47:53.393237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.207 [2024-11-15 12:47:53.393256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.207 [2024-11-15 12:47:53.393495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.207 [2024-11-15 12:47:53.393745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.207 [2024-11-15 12:47:53.393766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.207 [2024-11-15 12:47:53.393782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.207 [2024-11-15 12:47:53.393797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.207 [2024-11-15 12:47:53.406206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.207 [2024-11-15 12:47:53.406678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.207 [2024-11-15 12:47:53.406715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.207 [2024-11-15 12:47:53.406744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.207 [2024-11-15 12:47:53.406982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.207 [2024-11-15 12:47:53.407205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.207 [2024-11-15 12:47:53.407225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.207 [2024-11-15 12:47:53.407240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.207 [2024-11-15 12:47:53.407256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:13.207 [2024-11-15 12:47:53.419833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.208 [2024-11-15 12:47:53.420159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.208 [2024-11-15 12:47:53.420203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.208 [2024-11-15 12:47:53.420219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.208 [2024-11-15 12:47:53.420448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.208 [2024-11-15 12:47:53.420668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.208 [2024-11-15 12:47:53.420687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.208 [2024-11-15 12:47:53.420715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.208 [2024-11-15 12:47:53.420737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.208 [2024-11-15 12:47:53.433408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.208 [2024-11-15 12:47:53.433771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.208 [2024-11-15 12:47:53.433800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.208 [2024-11-15 12:47:53.433816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.208 [2024-11-15 12:47:53.434029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.208 [2024-11-15 12:47:53.434258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.208 [2024-11-15 12:47:53.434279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.208 [2024-11-15 12:47:53.434292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.208 [2024-11-15 12:47:53.434304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:13.208 [2024-11-15 12:47:53.446900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.208 [2024-11-15 12:47:53.447306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.208 [2024-11-15 12:47:53.447344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.208 [2024-11-15 12:47:53.447360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.208 [2024-11-15 12:47:53.447589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.208 [2024-11-15 12:47:53.447832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.208 [2024-11-15 12:47:53.447855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.208 [2024-11-15 12:47:53.447871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.208 [2024-11-15 12:47:53.447885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.208 [2024-11-15 12:47:53.460407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.208 [2024-11-15 12:47:53.460821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.208 [2024-11-15 12:47:53.460849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.208 [2024-11-15 12:47:53.460865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.208 [2024-11-15 12:47:53.461094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.208 [2024-11-15 12:47:53.461314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.208 [2024-11-15 12:47:53.461333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.208 [2024-11-15 12:47:53.461346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.208 [2024-11-15 12:47:53.461357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:13.208 [2024-11-15 12:47:53.473834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.208 [2024-11-15 12:47:53.474244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.208 [2024-11-15 12:47:53.474272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.208 [2024-11-15 12:47:53.474288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.208 [2024-11-15 12:47:53.474516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.208 [2024-11-15 12:47:53.474763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.208 [2024-11-15 12:47:53.474785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.208 [2024-11-15 12:47:53.474798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.208 [2024-11-15 12:47:53.474816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.208 [2024-11-15 12:47:53.475248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:13.208 [2024-11-15 12:47:53.487342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.208 [2024-11-15 12:47:53.487734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.208 [2024-11-15 12:47:53.487763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.208 [2024-11-15 12:47:53.487786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.208 [2024-11-15 12:47:53.488014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.208 [2024-11-15 12:47:53.488220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.208 [2024-11-15 12:47:53.488240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.208 [2024-11-15 12:47:53.488253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.208 [2024-11-15 12:47:53.488266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:13.208 [2024-11-15 12:47:53.500762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.208 [2024-11-15 12:47:53.501157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.208 [2024-11-15 12:47:53.501201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.208 [2024-11-15 12:47:53.501218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.208 [2024-11-15 12:47:53.501475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.208 [2024-11-15 12:47:53.501679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.208 [2024-11-15 12:47:53.501714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.208 [2024-11-15 12:47:53.501738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.208 [2024-11-15 12:47:53.501753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.208 [2024-11-15 12:47:53.514340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.208 [2024-11-15 12:47:53.514760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.208 [2024-11-15 12:47:53.514788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.208 [2024-11-15 12:47:53.514805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.208 [2024-11-15 12:47:53.515019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.208 [2024-11-15 12:47:53.515239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.208 [2024-11-15 12:47:53.515258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.208 [2024-11-15 12:47:53.515281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.208 [2024-11-15 12:47:53.515293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:13.208 Malloc0 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.208 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:13.209 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.209 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:13.209 [2024-11-15 12:47:53.528022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.209 [2024-11-15 12:47:53.528378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.209 [2024-11-15 12:47:53.528408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.209 [2024-11-15 12:47:53.528425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.209 [2024-11-15 12:47:53.528640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.209 [2024-11-15 12:47:53.528894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.209 [2024-11-15 12:47:53.528916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.209 [2024-11-15 12:47:53.528931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.209 [2024-11-15 12:47:53.528945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.209 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.209 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:13.209 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.209 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:13.209 3744.17 IOPS, 14.63 MiB/s [2024-11-15T11:47:53.553Z] 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.209 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.209 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.209 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:13.209 [2024-11-15 12:47:53.541607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.209 [2024-11-15 12:47:53.541989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.209 [2024-11-15 12:47:53.542018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1218a40 with addr=10.0.0.2, port=4420 00:26:13.209 [2024-11-15 12:47:53.542034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1218a40 is same with the state(6) to be set 00:26:13.209 [2024-11-15 12:47:53.542261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218a40 (9): Bad file descriptor 00:26:13.209 [2024-11-15 12:47:53.542481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.209 [2024-11-15 12:47:53.542501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.209 [2024-11-15 12:47:53.542514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.209 [2024-11-15 12:47:53.542531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:13.209 [2024-11-15 12:47:53.543087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.209 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.209 12:47:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1131539 00:26:13.467 [2024-11-15 12:47:53.555293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.467 [2024-11-15 12:47:53.624321] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:26:15.333 4335.86 IOPS, 16.94 MiB/s [2024-11-15T11:47:56.609Z] 4889.12 IOPS, 19.10 MiB/s [2024-11-15T11:47:57.981Z] 5304.33 IOPS, 20.72 MiB/s [2024-11-15T11:47:58.912Z] 5645.70 IOPS, 22.05 MiB/s [2024-11-15T11:47:59.854Z] 5924.55 IOPS, 23.14 MiB/s [2024-11-15T11:48:00.786Z] 6159.33 IOPS, 24.06 MiB/s [2024-11-15T11:48:01.718Z] 6353.38 IOPS, 24.82 MiB/s [2024-11-15T11:48:02.708Z] 6528.43 IOPS, 25.50 MiB/s 00:26:22.364 Latency(us) 00:26:22.364 [2024-11-15T11:48:02.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.364 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:22.364 Verification LBA range: start 0x0 length 0x4000 00:26:22.364 Nvme1n1 : 15.00 6671.12 26.06 10219.40 0.00 7555.67 567.37 25049.32 00:26:22.364 [2024-11-15T11:48:02.708Z] =================================================================================================================== 00:26:22.364 [2024-11-15T11:48:02.708Z] Total : 6671.12 26.06 10219.40 0.00 7555.67 567.37 25049.32 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:22.636 rmmod nvme_tcp 00:26:22.636 rmmod nvme_fabrics 00:26:22.636 rmmod nvme_keyring 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1132205 ']' 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1132205 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1132205 ']' 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1132205 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1132205 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1132205' 00:26:22.636 killing process with pid 1132205 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1132205 00:26:22.636 12:48:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1132205 00:26:22.911 12:48:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:22.911 12:48:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:22.911 12:48:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:22.911 12:48:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:22.911 12:48:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:26:22.911 12:48:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:22.911 12:48:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:26:22.911 12:48:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:22.911 12:48:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:22.911 12:48:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.911 12:48:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.911 12:48:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:25.456 00:26:25.456 real 0m22.608s 00:26:25.456 user 0m59.120s 00:26:25.456 sys 0m4.830s 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.456 ************************************ 00:26:25.456 END TEST nvmf_bdevperf 00:26:25.456 ************************************ 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.456 ************************************ 00:26:25.456 START TEST nvmf_target_disconnect 00:26:25.456 ************************************ 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:25.456 * Looking for test storage... 
00:26:25.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:25.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.456 --rc genhtml_branch_coverage=1 00:26:25.456 --rc genhtml_function_coverage=1 00:26:25.456 --rc genhtml_legend=1 00:26:25.456 --rc geninfo_all_blocks=1 00:26:25.456 --rc geninfo_unexecuted_blocks=1 00:26:25.456 00:26:25.456 ' 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:25.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.456 --rc genhtml_branch_coverage=1 00:26:25.456 --rc genhtml_function_coverage=1 00:26:25.456 --rc genhtml_legend=1 00:26:25.456 --rc geninfo_all_blocks=1 00:26:25.456 --rc geninfo_unexecuted_blocks=1 00:26:25.456 00:26:25.456 ' 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:25.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.456 --rc genhtml_branch_coverage=1 00:26:25.456 --rc genhtml_function_coverage=1 00:26:25.456 --rc genhtml_legend=1 00:26:25.456 --rc geninfo_all_blocks=1 00:26:25.456 --rc geninfo_unexecuted_blocks=1 00:26:25.456 00:26:25.456 ' 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:25.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.456 --rc genhtml_branch_coverage=1 00:26:25.456 --rc genhtml_function_coverage=1 00:26:25.456 --rc genhtml_legend=1 00:26:25.456 --rc geninfo_all_blocks=1 00:26:25.456 --rc geninfo_unexecuted_blocks=1 00:26:25.456 00:26:25.456 ' 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.456 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:25.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:25.457 12:48:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:27.358 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:27.358 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:26:27.358 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:27.359 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:27.359 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:27.359 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:27.359 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:27.359 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:27.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:26:27.359 00:26:27.359 --- 10.0.0.2 ping statistics --- 00:26:27.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.359 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:26:27.360 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:27.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:26:27.360 00:26:27.360 --- 10.0.0.1 ping statistics --- 00:26:27.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.360 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:26:27.360 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.360 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:26:27.360 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:27.618 ************************************ 00:26:27.618 START TEST nvmf_target_disconnect_tc1 00:26:27.618 ************************************ 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:27.618 12:48:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:27.618 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:27.618 [2024-11-15 12:48:07.832282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.618 [2024-11-15 12:48:07.832361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x70df40 with addr=10.0.0.2, port=4420 00:26:27.618 [2024-11-15 12:48:07.832408] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:27.618 [2024-11-15 12:48:07.832427] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:27.618 [2024-11-15 12:48:07.832440] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:27.618 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:27.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:27.619 Initializing NVMe Controllers 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:27.619 00:26:27.619 real 0m0.096s 00:26:27.619 user 0m0.046s 00:26:27.619 sys 0m0.050s 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:27.619 ************************************ 00:26:27.619 END TEST nvmf_target_disconnect_tc1 00:26:27.619 ************************************ 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
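The tc1 case above only needs the probe to fail: build/examples/reconnect is pointed at 10.0.0.2:4420 while no target is listening in the namespace, so spdk_nvme_probe() reports connect() errno 111 (ECONNREFUSED) and the NOT wrapper asserts a non-zero exit. A minimal standalone sketch of the same negative check, using only the binary and arguments shown in the trace (the workspace path and addresses are copied from this job and would differ elsewhere):

    # Hypothetical standalone version of the tc1 check; paths/IPs taken from the log above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    if "$SPDK/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
           -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "unexpected: probe succeeded with no listener on 10.0.0.2:4420" >&2
        exit 1
    fi
    echo "probe failed as expected (connection refused), matching the tc1 result"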
00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:27.619 ************************************ 00:26:27.619 START TEST nvmf_target_disconnect_tc2 00:26:27.619 ************************************ 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1135798 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1135798 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1135798 ']' 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:27.619 12:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:27.619 [2024-11-15 12:48:07.948172] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:26:27.619 [2024-11-15 12:48:07.948258] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.877 [2024-11-15 12:48:08.026803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:27.877 [2024-11-15 12:48:08.087697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.877 [2024-11-15 12:48:08.087782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:27.877 [2024-11-15 12:48:08.087812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.877 [2024-11-15 12:48:08.087823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.877 [2024-11-15 12:48:08.087833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.877 [2024-11-15 12:48:08.089346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:27.877 [2024-11-15 12:48:08.089405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:27.877 [2024-11-15 12:48:08.089468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:27.877 [2024-11-15 12:48:08.089471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.135 Malloc0 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.135 [2024-11-15 12:48:08.309471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.135 12:48:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.135 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.135 [2024-11-15 12:48:08.337755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.136 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.136 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:28.136 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.136 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.136 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.136 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1136027 00:26:28.136 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:28.136 12:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:30.036 12:48:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1135798 00:26:30.036 12:48:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with 
error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 [2024-11-15 12:48:10.368191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Read completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 
Write completed with error (sct=0, sc=8) 00:26:30.036 starting I/O failed 00:26:30.036 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 [2024-11-15 12:48:10.368598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O 
failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 [2024-11-15 12:48:10.368935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 
00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Read completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 Write completed with error (sct=0, sc=8) 00:26:30.037 starting I/O failed 00:26:30.037 [2024-11-15 12:48:10.369284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:30.037 [2024-11-15 12:48:10.369545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.037 [2024-11-15 12:48:10.369591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.037 qpair failed and we were unable to recover it. 00:26:30.037 [2024-11-15 12:48:10.369744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.037 [2024-11-15 12:48:10.369784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.037 qpair failed and we were unable to recover it. 00:26:30.037 [2024-11-15 12:48:10.369883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.037 [2024-11-15 12:48:10.369910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.037 qpair failed and we were unable to recover it. 00:26:30.037 [2024-11-15 12:48:10.369992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.037 [2024-11-15 12:48:10.370026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.037 qpair failed and we were unable to recover it. 00:26:30.037 [2024-11-15 12:48:10.370151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.037 [2024-11-15 12:48:10.370177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.037 qpair failed and we were unable to recover it. 00:26:30.037 [2024-11-15 12:48:10.370320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.037 [2024-11-15 12:48:10.370346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.037 qpair failed and we were unable to recover it. 00:26:30.037 [2024-11-15 12:48:10.370533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.037 [2024-11-15 12:48:10.370569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.037 qpair failed and we were unable to recover it. 00:26:30.037 [2024-11-15 12:48:10.370712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.037 [2024-11-15 12:48:10.370744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.037 qpair failed and we were unable to recover it. 
00:26:30.037 [2024-11-15 12:48:10.370842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.037 [2024-11-15 12:48:10.370874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.037 qpair failed and we were unable to recover it. 00:26:30.037 [2024-11-15 12:48:10.370976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.371003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.371122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.371148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.371283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.371322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.371423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.371451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.371554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.371579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.371693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.371725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.371825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.371850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.371938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.371963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.372081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.372108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 
00:26:30.038 [2024-11-15 12:48:10.372237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.372262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.372352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.372378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.372495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.372521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.372663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.372689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.372830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.372857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.372979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.373007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.373096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.373121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.373239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.373266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.373357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.373383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.373471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.373496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 
00:26:30.038 [2024-11-15 12:48:10.373614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.373641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.373783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.373809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.373934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.373960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.374106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.374133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.374252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.374280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.374426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.374452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.374563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.374590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.374685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.374712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.374829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.374854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.374968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.374995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 
00:26:30.038 [2024-11-15 12:48:10.375084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.375109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.375220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.375246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.375340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.375365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.375500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.375526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.375610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.375635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.375716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.375749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.375911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.375937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.376076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.376101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.038 [2024-11-15 12:48:10.376213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.038 [2024-11-15 12:48:10.376239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.038 qpair failed and we were unable to recover it. 00:26:30.039 [2024-11-15 12:48:10.376354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.039 [2024-11-15 12:48:10.376379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.039 qpair failed and we were unable to recover it. 
00:26:30.039 [2024-11-15 12:48:10.376471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.039 [2024-11-15 12:48:10.376502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.039 qpair failed and we were unable to recover it. 00:26:30.039 [2024-11-15 12:48:10.376644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.039 [2024-11-15 12:48:10.376692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.039 qpair failed and we were unable to recover it. 00:26:30.039 [2024-11-15 12:48:10.376828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.039 [2024-11-15 12:48:10.376867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.039 qpair failed and we were unable to recover it. 00:26:30.039 [2024-11-15 12:48:10.376999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.039 [2024-11-15 12:48:10.377026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.039 qpair failed and we were unable to recover it. 00:26:30.039 [2024-11-15 12:48:10.377170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.039 [2024-11-15 12:48:10.377196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.039 qpair failed and we were unable to recover it. 00:26:30.039 [2024-11-15 12:48:10.377285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.039 [2024-11-15 12:48:10.377311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.039 qpair failed and we were unable to recover it. 00:26:30.039 [2024-11-15 12:48:10.377431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.039 [2024-11-15 12:48:10.377458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.039 qpair failed and we were unable to recover it. 00:26:30.039 [2024-11-15 12:48:10.377581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.039 [2024-11-15 12:48:10.377608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.039 qpair failed and we were unable to recover it. 00:26:30.039 [2024-11-15 12:48:10.377727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.039 [2024-11-15 12:48:10.377753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.039 qpair failed and we were unable to recover it. 00:26:30.039 [2024-11-15 12:48:10.377867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.039 [2024-11-15 12:48:10.377892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.039 qpair failed and we were unable to recover it. 
00:26:30.039 [2024-11-15 12:48:10.378006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.039 [2024-11-15 12:48:10.378039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.039 qpair failed and we were unable to recover it. 00:26:30.039 [2024-11-15 12:48:10.378156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.039 [2024-11-15 12:48:10.378181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.039 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.378277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.378303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.378412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.378440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.378558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.378584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.378699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.378736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.378859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.378886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.379021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.379047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.379129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.379155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.379294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.379321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 
00:26:30.334 [2024-11-15 12:48:10.379406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.379432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.379553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.379580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.379697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.379734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.379864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.379895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.379996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.380028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.380154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.380187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.380316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.380342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.334 [2024-11-15 12:48:10.380480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.334 [2024-11-15 12:48:10.380523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:30.334 qpair failed and we were unable to recover it. 00:26:30.335 [2024-11-15 12:48:10.380704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.335 [2024-11-15 12:48:10.380739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.335 qpair failed and we were unable to recover it. 00:26:30.335 [2024-11-15 12:48:10.380831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.335 [2024-11-15 12:48:10.380857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.335 qpair failed and we were unable to recover it. 
00:26:30.335 [2024-11-15 12:48:10.380977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.335 [2024-11-15 12:48:10.381003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420
00:26:30.335 qpair failed and we were unable to recover it.
[... the same three-line pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously between 12:48:10.381116 and 12:48:10.410193 for tqpair handles 0x7fea00000b90, 0x7fea04000b90 and 0x1bdefa0, all targeting addr=10.0.0.2, port=4420 ...]
00:26:30.341 [2024-11-15 12:48:10.410306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.341 [2024-11-15 12:48:10.410331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420
00:26:30.341 qpair failed and we were unable to recover it.
00:26:30.341 [2024-11-15 12:48:10.410444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.410470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.410585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.410615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.410700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.410732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.410875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.410900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.411042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.411067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.411153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.411178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.411288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.411313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.411425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.411452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.411563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.411589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.411679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.411705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 
00:26:30.341 [2024-11-15 12:48:10.411826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.411851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.411940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.411967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.412110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.412136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.412221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.412247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.412397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.412421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.412510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.412538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.412621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.412647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.412769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.412796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.412906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.412931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 00:26:30.341 [2024-11-15 12:48:10.413013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.341 [2024-11-15 12:48:10.413038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.341 qpair failed and we were unable to recover it. 
00:26:30.341 [2024-11-15 12:48:10.413181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.413206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.413323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.413348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.413465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.413491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.413605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.413629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.413754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.413793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.413918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.413945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.414055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.414082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.414195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.414221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.414317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.414344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.414460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.414486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 
00:26:30.342 [2024-11-15 12:48:10.414602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.414628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.414745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.414773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.414859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.414885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.414992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.415018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.415144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.415170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.415317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.415343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.415430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.415457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.415596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.415622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.415752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.415779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.415896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.415922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 
00:26:30.342 [2024-11-15 12:48:10.416026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.416052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.416172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.416203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.416294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.416322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.416437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.416462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.416573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.416599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.416748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.416773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.416866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.416891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.417004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.417029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.417118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.417144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.417280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.417307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 
00:26:30.342 [2024-11-15 12:48:10.417433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.417458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.417578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.417606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.417755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.417783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.417873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.417899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.418018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.418045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.418166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.418191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.418300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.418326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.418446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.342 [2024-11-15 12:48:10.418473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.342 qpair failed and we were unable to recover it. 00:26:30.342 [2024-11-15 12:48:10.418584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.418609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.418729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.418755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 
00:26:30.343 [2024-11-15 12:48:10.418871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.418896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.418984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.419009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.419118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.419143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.419219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.419244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.419333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.419358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.419431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.419457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.419598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.419623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.419733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.419758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.419877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.419903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.420022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.420047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 
00:26:30.343 [2024-11-15 12:48:10.420128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.420154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.420289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.420314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.420430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.420457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.420568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.420593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.420682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.420709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.420857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.420882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.421019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.421044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.421164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.421190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.421303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.421328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.421440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.421467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 
00:26:30.343 [2024-11-15 12:48:10.421550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.421576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.421704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.421752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.421875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.421901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.422011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.422037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.422158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.422184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.422263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.422290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.422430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.422479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.422593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.422621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.422736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.422762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.422874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.422900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 
00:26:30.343 [2024-11-15 12:48:10.423011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.423061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.423178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.423203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.423317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.423343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.423464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.423489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.423602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.423627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.423716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.343 [2024-11-15 12:48:10.423747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.343 qpair failed and we were unable to recover it. 00:26:30.343 [2024-11-15 12:48:10.423830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.423856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.423950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.423975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.424087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.424113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.424230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.424255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 
00:26:30.344 [2024-11-15 12:48:10.424345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.424372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.424487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.424513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.424655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.424681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.424808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.424834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.424975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.425000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.425120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.425145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.425258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.425284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.425401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.425427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.425517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.425546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.425685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.425711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 
00:26:30.344 [2024-11-15 12:48:10.425838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.425865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.426007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.426033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.426114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.426139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.426280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.426327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.426440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.426466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.426580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.426606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.426757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.426784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.426867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.426893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.426980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.427006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.427148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.427174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 
00:26:30.344 [2024-11-15 12:48:10.427246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.427272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.427413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.427448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.427537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.427563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.427682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.427709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.427805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.427831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.427916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.427941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.428036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.428062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.428200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.428227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.344 qpair failed and we were unable to recover it. 00:26:30.344 [2024-11-15 12:48:10.428339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.344 [2024-11-15 12:48:10.428364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.428478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.428506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 
00:26:30.345 [2024-11-15 12:48:10.428643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.428670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.428814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.428841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.428990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.429050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.429135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.429161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.429286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.429332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.429455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.429481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.429600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.429626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.429713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.429748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.429862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.429889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.430027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.430054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 
00:26:30.345 [2024-11-15 12:48:10.430170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.430196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.430314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.430342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.430425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.430451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.430568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.430594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.430689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.430714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.430837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.430862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.430947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.430973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.431123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.431149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.431288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.431322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.431410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.431435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 
00:26:30.345 [2024-11-15 12:48:10.431581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.431609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.431743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.431770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.431862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.431888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.432042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.432068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.432176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.432202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.432342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.432392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.432479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.432505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.432648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.432674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.432764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.432792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.432912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.432938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 
00:26:30.345 [2024-11-15 12:48:10.433022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.433049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.433161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.433187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.433340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.433367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.433482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.433508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.433624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.433650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.433758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.433785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.345 qpair failed and we were unable to recover it. 00:26:30.345 [2024-11-15 12:48:10.433877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.345 [2024-11-15 12:48:10.433903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.434033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.434059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.434179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.434205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.434347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.434373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 
00:26:30.346 [2024-11-15 12:48:10.434489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.434515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.434654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.434681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.434832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.434858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.434973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.434999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.435113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.435139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.435292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.435318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.435408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.435435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.435550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.435577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.435667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.435693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.435817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.435843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 
00:26:30.346 [2024-11-15 12:48:10.435943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.435982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.436096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.436123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.436239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.436264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.436388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.436413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.436500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.436525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.436669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.436695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.436789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.436817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.436929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.436955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.437075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.437105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.437197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.437224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 
00:26:30.346 [2024-11-15 12:48:10.437314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.437341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.437453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.437480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.437563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.437591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.437696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.437733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.437826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.437852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.437935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.437960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.438076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.438102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.438212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.438237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.438358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.438384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 00:26:30.346 [2024-11-15 12:48:10.438492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.346 [2024-11-15 12:48:10.438516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.346 qpair failed and we were unable to recover it. 
00:26:30.347 [2024-11-15 12:48:10.438629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.438655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.438772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.438800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.438949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.438975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.439085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.439111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.439225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.439252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.439367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.439393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.439531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.439557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.439646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.439673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.439800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.439826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.439937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.439963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 
00:26:30.347 [2024-11-15 12:48:10.440112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.440138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.440251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.440276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.440393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.440419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.440561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.440586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.440708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.440740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.440864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.440890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.440980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.441005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.441090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.441115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.441227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.441253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.441386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.441415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 
00:26:30.347 [2024-11-15 12:48:10.441541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.441567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.441686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.441712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.441842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.441869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.441979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.442005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.442148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.442174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.442291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.442317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.442451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.442477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.442590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.442616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.442712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.442751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.442869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.442895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 
00:26:30.347 [2024-11-15 12:48:10.442985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.443012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.443099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.443124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.443229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.347 [2024-11-15 12:48:10.443255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.347 qpair failed and we were unable to recover it. 00:26:30.347 [2024-11-15 12:48:10.443364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.443389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.443500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.443528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.443612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.443638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.443728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.443755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.443866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.443892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.444006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.444033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.444142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.444168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 
00:26:30.348 [2024-11-15 12:48:10.444311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.444338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.444479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.444505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.444625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.444651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.444791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.444819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.444928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.444954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.445036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.445061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.445172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.445198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.445307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.445332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.445442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.445467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.445610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.445637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 
00:26:30.348 [2024-11-15 12:48:10.445730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.445756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.445876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.445902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.446042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.446068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.446180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.446206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.446324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.446351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.446481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.446507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.446651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.446677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.446823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.446850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.446993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.447020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.447139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.447165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 
00:26:30.348 [2024-11-15 12:48:10.447251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.447277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.447395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.447421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.447564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.447591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.447673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.447698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.447824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.447852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.447969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.447995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.448139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.448165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.448252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.448278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.448394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.448424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 00:26:30.348 [2024-11-15 12:48:10.448567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.348 [2024-11-15 12:48:10.448593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.348 qpair failed and we were unable to recover it. 
00:26:30.349 [2024-11-15 12:48:10.448733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.448760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.448848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.448875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.448954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.448980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.449168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.449216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.449305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.449331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.449449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.449475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.449613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.449638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.449748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.449774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.449905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.449931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.450084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.450110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 
00:26:30.349 [2024-11-15 12:48:10.450192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.450218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.450326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.450352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.450434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.450461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.450589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.450614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.450736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.450762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.450877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.450905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.451023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.451073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.451265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.451290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.451378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.451403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.451481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.451507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 
00:26:30.349 [2024-11-15 12:48:10.451593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.451619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.451737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.451763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.451872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.451898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.452013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.452038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.452115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.452141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.452220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.349 [2024-11-15 12:48:10.452247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.349 qpair failed and we were unable to recover it. 00:26:30.349 [2024-11-15 12:48:10.452381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.350 [2024-11-15 12:48:10.452406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.350 qpair failed and we were unable to recover it. 00:26:30.350 [2024-11-15 12:48:10.452523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.350 [2024-11-15 12:48:10.452548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.350 qpair failed and we were unable to recover it. 00:26:30.350 [2024-11-15 12:48:10.452661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.350 [2024-11-15 12:48:10.452686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.350 qpair failed and we were unable to recover it. 00:26:30.350 [2024-11-15 12:48:10.452804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.350 [2024-11-15 12:48:10.452830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.350 qpair failed and we were unable to recover it. 
00:26:30.350 [2024-11-15 12:48:10.452943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.350 [2024-11-15 12:48:10.452969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.350 qpair failed and we were unable to recover it. 00:26:30.350 [2024-11-15 12:48:10.453077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.350 [2024-11-15 12:48:10.453102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.350 qpair failed and we were unable to recover it. 00:26:30.350 [2024-11-15 12:48:10.453210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.350 [2024-11-15 12:48:10.453235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.350 qpair failed and we were unable to recover it. 00:26:30.350 [2024-11-15 12:48:10.453377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.350 [2024-11-15 12:48:10.453402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.350 qpair failed and we were unable to recover it. 00:26:30.350 [2024-11-15 12:48:10.453491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.350 [2024-11-15 12:48:10.453518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.350 qpair failed and we were unable to recover it. 00:26:30.350 [2024-11-15 12:48:10.453660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.350 [2024-11-15 12:48:10.453685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.350 qpair failed and we were unable to recover it. 00:26:30.350 [2024-11-15 12:48:10.453815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.350 [2024-11-15 12:48:10.453843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.350 qpair failed and we were unable to recover it. 00:26:30.350 [2024-11-15 12:48:10.453964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.350 [2024-11-15 12:48:10.453990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.350 qpair failed and we were unable to recover it. 00:26:30.350 [2024-11-15 12:48:10.454127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.350 [2024-11-15 12:48:10.454157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.350 qpair failed and we were unable to recover it. 00:26:30.350 [2024-11-15 12:48:10.454287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.350 [2024-11-15 12:48:10.454313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.350 qpair failed and we were unable to recover it. 
00:26:30.356 [2024-11-15 12:48:10.480745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.480772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.480865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.480893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.481078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.481127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.481309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.481353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.481468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.481494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.481606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.481633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.481728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.481755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.481839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.481864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.481980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.482013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.482134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.482160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 
00:26:30.356 [2024-11-15 12:48:10.482281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.482309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.482424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.482451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.482568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.482595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.482740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.482767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.482880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.482906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.483017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.483043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.483155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.483181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.483296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.483322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.483463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.483489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.483581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.483606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 
00:26:30.356 [2024-11-15 12:48:10.483700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.483737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.483823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.483851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.483973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.484001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.484127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.484153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.484293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.484318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.484398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.484423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.356 qpair failed and we were unable to recover it. 00:26:30.356 [2024-11-15 12:48:10.484567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.356 [2024-11-15 12:48:10.484593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.484705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.484737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.484848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.484875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.484992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.485019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 
00:26:30.357 [2024-11-15 12:48:10.485101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.485127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.485267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.485295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.485415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.485442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.485559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.485585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.485701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.485737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.485874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.485909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.486026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.486065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.486189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.486215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.486326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.486353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.486439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.486464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 
00:26:30.357 [2024-11-15 12:48:10.486574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.486601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.486742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.486768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.486906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.486932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.487016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.487042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.487203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.487252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.487450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.487476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.487616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.487643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.487761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.487791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.487933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.487983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.488129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.488155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 
00:26:30.357 [2024-11-15 12:48:10.488274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.488300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.488383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.488408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.488520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.488546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.488658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.488684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.488780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.488807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.488891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.488918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.489069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.489095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.489208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.489234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.489311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.489337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.489454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.489480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 
00:26:30.357 [2024-11-15 12:48:10.489569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.489594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.489708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.489743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.489828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.489854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.357 [2024-11-15 12:48:10.489959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.357 [2024-11-15 12:48:10.489985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.357 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.490097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.490123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.490238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.490266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.490380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.490406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.490523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.490549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.490637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.490663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.490802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.490830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 
00:26:30.358 [2024-11-15 12:48:10.490939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.490964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.491052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.491080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.491168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.491194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.491309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.491336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.491456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.491482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.491625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.491655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.491773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.491800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.491917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.491943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.492022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.492048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.492186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.492212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 
00:26:30.358 [2024-11-15 12:48:10.492294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.492321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.492409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.492435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.492525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.492551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.492669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.492695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.492832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.492867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.493005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.493042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.493137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.493164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.493246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.493273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.493357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.493382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.493506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.493545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 
00:26:30.358 [2024-11-15 12:48:10.493633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.493661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.493783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.493810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.493926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.493952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.494081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.494132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.494238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.494289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.494431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.494457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.494571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.494598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.494715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.494750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.494892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.494918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.494998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.495025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 
00:26:30.358 [2024-11-15 12:48:10.495141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.358 [2024-11-15 12:48:10.495167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.358 qpair failed and we were unable to recover it. 00:26:30.358 [2024-11-15 12:48:10.495282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.495308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.495455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.495481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.495596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.495621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.495708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.495746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.495859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.495889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.496006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.496032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.496169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.496218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.496330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.496356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.496468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.496494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 
00:26:30.359 [2024-11-15 12:48:10.496573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.496599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.496684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.496711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.496816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.496842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.496924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.496949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.497038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.497065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.497184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.497216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.497360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.497386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.497509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.497537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.497616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.497642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.497735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.497762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 
00:26:30.359 [2024-11-15 12:48:10.497901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.497949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.498096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.498142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.498325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.498351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.498428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.498454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.498560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.498586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.498684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.498731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.498934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.498983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.499086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.499124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.499217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.499243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.499394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.499420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 
00:26:30.359 [2024-11-15 12:48:10.499534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.499559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.499684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.499712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.359 [2024-11-15 12:48:10.499844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.359 [2024-11-15 12:48:10.499871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.359 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.499955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.499983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.500102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.500148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.500260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.500307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.500382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.500408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.500525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.500551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.500696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.500743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.500860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.500887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 
00:26:30.360 [2024-11-15 12:48:10.500972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.501000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.501094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.501121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.501236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.501262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.501374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.501400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.501520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.501547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.501659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.501685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.501780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.501818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.501911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.501937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.502019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.502046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.502164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.502190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 
00:26:30.360 [2024-11-15 12:48:10.502270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.502296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.502413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.502438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.502554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.502582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.502701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.502734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.502876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.502902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.503007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.503038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.503131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.503157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.503271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.503297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.503385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.503411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.503521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.503547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 
00:26:30.360 [2024-11-15 12:48:10.503685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.503711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.503811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.503837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.503964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.504001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.504092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.504117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.504207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.504235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.504358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.504384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.504471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.504497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.504571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.504597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.504739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.504766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.360 [2024-11-15 12:48:10.504852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.504878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 
00:26:30.360 [2024-11-15 12:48:10.504968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.360 [2024-11-15 12:48:10.504995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.360 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.505104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.505130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.505243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.505269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.505349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.505375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.505490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.505516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.505609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.505634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.505722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.505749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.505823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.505848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.505933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.505959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.506048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.506074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 
00:26:30.361 [2024-11-15 12:48:10.506194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.506223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.506334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.506360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.506462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.506500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.506601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.506629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.506726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.506753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.506900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.506936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.507064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.507089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.507245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.507281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.507430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.507465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.507587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.507612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 
00:26:30.361 [2024-11-15 12:48:10.507753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.507779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.507902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.507930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.508066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.508116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.508280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.508307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.508449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.508475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.508594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.508622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.508725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.508751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.508872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.508897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.508987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.509013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.509096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.509120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 
00:26:30.361 [2024-11-15 12:48:10.509214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.509242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.509431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.509484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.509577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.509603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.509713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.509745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.509829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.509855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.510007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.510056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.510197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.510248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.510394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.510441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.361 qpair failed and we were unable to recover it. 00:26:30.361 [2024-11-15 12:48:10.510555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.361 [2024-11-15 12:48:10.510581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.510695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.510727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 
00:26:30.362 [2024-11-15 12:48:10.510811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.510838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.510980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.511006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.511176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.511226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.511342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.511368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.511444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.511470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.511620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.511647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.511762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.511788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.511876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.511902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.512012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.512063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.512141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.512168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 
00:26:30.362 [2024-11-15 12:48:10.512258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.512284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.512397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.512423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.512576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.512619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.512744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.512771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.512903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.512940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.513058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.513085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.513192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.513229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.513379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.513429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.513538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.513564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.513649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.513674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 
00:26:30.362 [2024-11-15 12:48:10.513769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.513797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.513941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.513966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.514077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.514103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.514187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.514212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.514334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.514363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.514470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.514497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.514589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.514615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.514689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.514715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.514830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.514856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.514938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.514965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 
00:26:30.362 [2024-11-15 12:48:10.515090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.515115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.515234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.515260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.515354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.515380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.515465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.515491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.515572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.515599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.515709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.515747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.362 qpair failed and we were unable to recover it. 00:26:30.362 [2024-11-15 12:48:10.515885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.362 [2024-11-15 12:48:10.515911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.516020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.516045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.516130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.516157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.516252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.516279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 
00:26:30.363 [2024-11-15 12:48:10.516399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.516425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.516570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.516596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.516710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.516745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.516827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.516853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.516971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.516998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.517081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.517107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.517214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.517240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.517351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.517389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.517539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.517565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.517654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.517679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 
00:26:30.363 [2024-11-15 12:48:10.517797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.517823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.517933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.517958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.518068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.518100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.518185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.518211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.518322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.518348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.518458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.518483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.518575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.518601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.518687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.518712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.518797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.518823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.518901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.518927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 
00:26:30.363 [2024-11-15 12:48:10.519064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.519090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.519205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.519230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.519351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.519379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.519467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.519493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.519577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.519603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.519686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.519711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.519911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.519964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.520093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.520142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.520222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.520249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.520335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.520360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 
00:26:30.363 [2024-11-15 12:48:10.520445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.520471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.520575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.520601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.520695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.520727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.520813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.520839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.363 qpair failed and we were unable to recover it. 00:26:30.363 [2024-11-15 12:48:10.520939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.363 [2024-11-15 12:48:10.520967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.521086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.521113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.521224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.521250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.521364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.521390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.521479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.521506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.521654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.521680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 
00:26:30.364 [2024-11-15 12:48:10.521779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.521807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.521895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.521920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.522061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.522087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.522170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.522195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.522305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.522331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.522444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.522469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.522582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.522610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.522715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.522784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.522978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.523017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.523170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.523208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 
00:26:30.364 [2024-11-15 12:48:10.523438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.523474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.523599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.523634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.523785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.523812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.523959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.524009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.524150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.524200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.524324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.524362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.524488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.524514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.524631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.524657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.524850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.524876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 00:26:30.364 [2024-11-15 12:48:10.525014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.364 [2024-11-15 12:48:10.525040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.364 qpair failed and we were unable to recover it. 
00:26:30.364 [2024-11-15 12:48:10.525131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.364 [2024-11-15 12:48:10.525158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:30.364 qpair failed and we were unable to recover it.
00:26:30.364 [... the same pair of errors repeats continuously from 12:48:10.525 through 12:48:10.555 (console time 00:26:30.364-00:26:30.370): posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 or tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:26:30.370 [2024-11-15 12:48:10.555745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.370 [2024-11-15 12:48:10.555773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.370 qpair failed and we were unable to recover it. 00:26:30.370 [2024-11-15 12:48:10.555886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.370 [2024-11-15 12:48:10.555912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.370 qpair failed and we were unable to recover it. 00:26:30.370 [2024-11-15 12:48:10.556026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.370 [2024-11-15 12:48:10.556053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.370 qpair failed and we were unable to recover it. 00:26:30.370 [2024-11-15 12:48:10.556204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.370 [2024-11-15 12:48:10.556229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.370 qpair failed and we were unable to recover it. 00:26:30.370 [2024-11-15 12:48:10.556315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.370 [2024-11-15 12:48:10.556341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.370 qpair failed and we were unable to recover it. 00:26:30.370 [2024-11-15 12:48:10.556450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.370 [2024-11-15 12:48:10.556477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.370 qpair failed and we were unable to recover it. 00:26:30.370 [2024-11-15 12:48:10.556587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.370 [2024-11-15 12:48:10.556614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.370 qpair failed and we were unable to recover it. 00:26:30.370 [2024-11-15 12:48:10.556727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.370 [2024-11-15 12:48:10.556754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.370 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.556894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.556920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.557033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.557064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 
00:26:30.371 [2024-11-15 12:48:10.557180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.557206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.557323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.557349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.557490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.557516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.557609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.557636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.557752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.557778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.557917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.557942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.558056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.558083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.558198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.558225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.558363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.558389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.558531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.558557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 
00:26:30.371 [2024-11-15 12:48:10.558670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.558696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.558797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.558823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.558932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.558958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.559082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.559109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.559198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.559224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.559339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.559365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.559476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.559502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.559591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.559618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.559708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.559743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.559882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.559908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 
00:26:30.371 [2024-11-15 12:48:10.560017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.560042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.560127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.560154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.560271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.560297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.560409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.560435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.560516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.560542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.560663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.560702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.560843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.560879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.561018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.561081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.561266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.561302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.561466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.561501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 
00:26:30.371 [2024-11-15 12:48:10.561646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.561680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.561808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.561836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.561954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.561980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.562062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.562088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.371 [2024-11-15 12:48:10.562257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.371 [2024-11-15 12:48:10.562307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.371 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.562419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.562444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.562591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.562617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.562729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.562756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.562869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.562895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.563009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.563039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 
00:26:30.372 [2024-11-15 12:48:10.563122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.563149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.563231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.563258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.563375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.563402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.563517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.563543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.563662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.563688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.563849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.563877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.564016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.564042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.564141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.564168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.564251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.564278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.564374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.564400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 
00:26:30.372 [2024-11-15 12:48:10.564477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.564503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.564615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.564641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.564761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.564788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.564910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.564936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.565061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.565087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.565196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.565222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.565304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.565331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.565418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.565444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.565554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.565580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.565715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.565760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 
00:26:30.372 [2024-11-15 12:48:10.565842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.565867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.565998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.566024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.566135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.566161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.566302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.566328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.566425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.566463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.566592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.566631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.566804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.566839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.567028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.567070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.567281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.567323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.567465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.567498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 
00:26:30.372 [2024-11-15 12:48:10.567639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.567667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.372 qpair failed and we were unable to recover it. 00:26:30.372 [2024-11-15 12:48:10.567776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.372 [2024-11-15 12:48:10.567804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.567896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.567922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.568040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.568066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.568243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.568270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.568410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.568435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.568546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.568573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.568687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.568713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.568835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.568862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.568945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.568971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 
00:26:30.373 [2024-11-15 12:48:10.569083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.569110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.569227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.569253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.569359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.569385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.569469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.569496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.569607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.569634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.569746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.569774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.569925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.569963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.570097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.570124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.570246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.570271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.570387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.570412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 
00:26:30.373 [2024-11-15 12:48:10.570555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.570580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.570668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.570692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.570823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.570848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.570946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.570977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.571060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.571084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.571220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.571254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.571409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.571443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.571624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.571687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.571885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.571911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.571996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.572020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 
00:26:30.373 [2024-11-15 12:48:10.572163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.373 [2024-11-15 12:48:10.572189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.373 qpair failed and we were unable to recover it. 00:26:30.373 [2024-11-15 12:48:10.572368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.572424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.572545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.572572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.572687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.572712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.572803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.572829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.572942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.572968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.573081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.573112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.573254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.573280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.573358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.573382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.573492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.573517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 
00:26:30.374 [2024-11-15 12:48:10.573627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.573651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.573759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.573784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.573901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.573925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.574035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.574060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.574165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.574198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.574347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.574381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.574578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.574619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.574785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.574825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.574920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.574948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 00:26:30.374 [2024-11-15 12:48:10.575030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.374 [2024-11-15 12:48:10.575057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.374 qpair failed and we were unable to recover it. 
00:26:30.374 [2024-11-15 12:48:10.575192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.374 [2024-11-15 12:48:10.575243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420
00:26:30.374 qpair failed and we were unable to recover it.
00:26:30.374 [... the same three-line error sequence (connect() failed, errno = 111 (ECONNREFUSED); sock connection error to addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 12:48:10.575 through 12:48:10.607 across tqpair handles 0x7fea00000b90, 0x7fea04000b90, and 0x1bdefa0 ...]
00:26:30.379 [2024-11-15 12:48:10.607777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.379 [2024-11-15 12:48:10.607816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420
00:26:30.379 qpair failed and we were unable to recover it.
00:26:30.379 [2024-11-15 12:48:10.607940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.379 [2024-11-15 12:48:10.607968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.379 qpair failed and we were unable to recover it. 00:26:30.379 [2024-11-15 12:48:10.608084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.379 [2024-11-15 12:48:10.608110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.379 qpair failed and we were unable to recover it. 00:26:30.379 [2024-11-15 12:48:10.608220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.379 [2024-11-15 12:48:10.608246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.379 qpair failed and we were unable to recover it. 00:26:30.379 [2024-11-15 12:48:10.608344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.379 [2024-11-15 12:48:10.608369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.379 qpair failed and we were unable to recover it. 00:26:30.379 [2024-11-15 12:48:10.608511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.379 [2024-11-15 12:48:10.608546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.379 qpair failed and we were unable to recover it. 00:26:30.379 [2024-11-15 12:48:10.608659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.379 [2024-11-15 12:48:10.608685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.379 qpair failed and we were unable to recover it. 00:26:30.379 [2024-11-15 12:48:10.608799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.379 [2024-11-15 12:48:10.608825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.379 qpair failed and we were unable to recover it. 00:26:30.379 [2024-11-15 12:48:10.608910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.379 [2024-11-15 12:48:10.608937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.379 qpair failed and we were unable to recover it. 00:26:30.379 [2024-11-15 12:48:10.609076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.379 [2024-11-15 12:48:10.609101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.379 qpair failed and we were unable to recover it. 00:26:30.379 [2024-11-15 12:48:10.609215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.379 [2024-11-15 12:48:10.609241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.379 qpair failed and we were unable to recover it. 
00:26:30.379 [2024-11-15 12:48:10.609357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.379 [2024-11-15 12:48:10.609382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.379 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.609465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.609490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.609602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.609627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.609742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.609769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.609914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.609940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.610066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.610104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.610199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.610226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.610344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.610370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.610490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.610514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.610650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.610675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 
00:26:30.380 [2024-11-15 12:48:10.610782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.610808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.610945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.610971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.611051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.611075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.611248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.611300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.611442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.611467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.611574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.611600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.611683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.611709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.611852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.611879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.611992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.612017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.612171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.612205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 
00:26:30.380 [2024-11-15 12:48:10.612310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.612344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.612519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.612560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.612705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.612742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.612892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.612917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.613046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.613078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.613219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.613253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.613463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.613510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.613589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.613615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.613753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.613778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.613897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.613923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 
00:26:30.380 [2024-11-15 12:48:10.614038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.614063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.614151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.614177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.614315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.614341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.614431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.614466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.614586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.614611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.614732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.614757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.614843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.614867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.614947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.614971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.615110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.615135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.615343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.615379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 
00:26:30.380 [2024-11-15 12:48:10.615548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.615584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.615731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.615767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.615863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.615888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.615970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.615994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.616081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.380 [2024-11-15 12:48:10.616105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.380 qpair failed and we were unable to recover it. 00:26:30.380 [2024-11-15 12:48:10.616280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.616316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.616531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.616567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.616740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.616779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.616877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.616910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.617022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.617048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 
00:26:30.381 [2024-11-15 12:48:10.617158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.617184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.617267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.617293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.617409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.617434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.617524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.617550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.617641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.617666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.617804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.617844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.617968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.617996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.618159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.618206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.618345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.618390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.618586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.618615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 
00:26:30.381 [2024-11-15 12:48:10.618705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.618743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.618864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.618891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.619012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.619058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.619182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.619218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.619457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.619493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.619614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.619640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.619752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.619778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.619855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.619880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.619965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.619991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.620078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.620104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 
00:26:30.381 [2024-11-15 12:48:10.620279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.620315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.620428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.620477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.620627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.620662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.620844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.620885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.620988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.621017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.621175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.621231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.621309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.621336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.621472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.621519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.621635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.621661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.621773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.621802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 
00:26:30.381 [2024-11-15 12:48:10.621923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.621950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.622090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.622138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.622220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.622246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.622356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.622383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.622469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.622496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.622586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.622613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.622728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.622755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.622863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.622889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.623004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.381 [2024-11-15 12:48:10.623031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.381 qpair failed and we were unable to recover it. 00:26:30.381 [2024-11-15 12:48:10.623178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.623204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 
00:26:30.382 [2024-11-15 12:48:10.623288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.623313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.623467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.623503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.623651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.623676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.623791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.623816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.623906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.623931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.624056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.624090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.624199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.624232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.624377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.624419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.624606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.624640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.624767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.624796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 
00:26:30.382 [2024-11-15 12:48:10.624894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.624920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.625037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.625064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.625211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.625246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.625406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.625460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.625561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.625588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.625732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.625758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.625874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.625899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.626027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.626052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.626163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.626189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.626347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.626381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 
00:26:30.382 [2024-11-15 12:48:10.626523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.626549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.626638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.626663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.626780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.626806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.626885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.626911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.627025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.627051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.627154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.627196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.627349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.627384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.627501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.627538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.627687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.382 [2024-11-15 12:48:10.627712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.382 qpair failed and we were unable to recover it. 00:26:30.382 [2024-11-15 12:48:10.627841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.627867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 
00:26:30.383 [2024-11-15 12:48:10.627983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.628008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.628104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.628138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.628334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.628368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.628508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.628576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.628726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.628752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.628866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.628891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.628976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.629001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.629101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.629135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.629320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.629354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.629533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.629573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 
00:26:30.383 [2024-11-15 12:48:10.629728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.629754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.629868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.629894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.630006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.630031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.630104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.630130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.630289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.630324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.630497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.630532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.630652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.630678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.630797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.630824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.630935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.630960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.631046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.631071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 
00:26:30.383 [2024-11-15 12:48:10.631219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.631254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.631392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.631426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.631587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.631612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.631735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.631761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.631879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.631904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.632025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.632049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.632129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.632155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.632239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.632265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.632443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.632499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.632647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.632673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 
00:26:30.383 [2024-11-15 12:48:10.632761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.632789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.632879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.632906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.633100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.633156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.633352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.633399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.633513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.633540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.633656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.383 [2024-11-15 12:48:10.633682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.383 qpair failed and we were unable to recover it. 00:26:30.383 [2024-11-15 12:48:10.633795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.633832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.633937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.633965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.634137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.634198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.634344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.634392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 
00:26:30.384 [2024-11-15 12:48:10.634479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.634507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.634592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.634619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.634729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.634756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.634838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.634864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.634949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.634974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.635090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.635116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.635286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.635334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.635467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.635516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.635599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.635626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.635740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.635768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 
00:26:30.384 [2024-11-15 12:48:10.635859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.635886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.636035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.636061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.636176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.636202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.636280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.636305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.636386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.636411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.636495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.636520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.636629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.636654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.636750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.636776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.636858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.636883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.636984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.637019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 
00:26:30.384 [2024-11-15 12:48:10.637192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.637226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.637369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.637403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.637520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.637569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.637742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.637770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.637879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.637904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.638019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.638044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.638163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.638189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.638346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.638382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.638572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.638606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.638768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.638794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 
00:26:30.384 [2024-11-15 12:48:10.638932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.638958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.639073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.639098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.639182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.639207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.639313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.639347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.384 qpair failed and we were unable to recover it. 00:26:30.384 [2024-11-15 12:48:10.639468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.384 [2024-11-15 12:48:10.639519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.385 qpair failed and we were unable to recover it. 00:26:30.385 [2024-11-15 12:48:10.639666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.385 [2024-11-15 12:48:10.639700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.385 qpair failed and we were unable to recover it. 00:26:30.385 [2024-11-15 12:48:10.639851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.385 [2024-11-15 12:48:10.639877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.385 qpair failed and we were unable to recover it. 00:26:30.385 [2024-11-15 12:48:10.640005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.385 [2024-11-15 12:48:10.640031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.385 qpair failed and we were unable to recover it. 00:26:30.385 [2024-11-15 12:48:10.640119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.385 [2024-11-15 12:48:10.640144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.385 qpair failed and we were unable to recover it. 00:26:30.385 [2024-11-15 12:48:10.640223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.385 [2024-11-15 12:48:10.640248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.385 qpair failed and we were unable to recover it. 
00:26:30.385 [2024-11-15 12:48:10.640327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.385 [2024-11-15 12:48:10.640352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.385 qpair failed and we were unable to recover it. 00:26:30.385 [2024-11-15 12:48:10.640447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.668 [2024-11-15 12:48:10.640474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.668 qpair failed and we were unable to recover it. 00:26:30.668 [2024-11-15 12:48:10.640553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.668 [2024-11-15 12:48:10.640578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.668 qpair failed and we were unable to recover it. 00:26:30.668 [2024-11-15 12:48:10.640692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.668 [2024-11-15 12:48:10.640743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.668 qpair failed and we were unable to recover it. 00:26:30.668 [2024-11-15 12:48:10.640857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.668 [2024-11-15 12:48:10.640882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.668 qpair failed and we were unable to recover it. 00:26:30.668 [2024-11-15 12:48:10.640963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.668 [2024-11-15 12:48:10.640988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.668 qpair failed and we were unable to recover it. 00:26:30.668 [2024-11-15 12:48:10.641073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.668 [2024-11-15 12:48:10.641098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.668 qpair failed and we were unable to recover it. 00:26:30.668 [2024-11-15 12:48:10.641177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.668 [2024-11-15 12:48:10.641202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.668 qpair failed and we were unable to recover it. 00:26:30.668 [2024-11-15 12:48:10.641309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.668 [2024-11-15 12:48:10.641343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.668 qpair failed and we were unable to recover it. 00:26:30.668 [2024-11-15 12:48:10.641453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.668 [2024-11-15 12:48:10.641487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.668 qpair failed and we were unable to recover it. 
00:26:30.668 [2024-11-15 12:48:10.641639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.668 [2024-11-15 12:48:10.641668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.668 qpair failed and we were unable to recover it. 00:26:30.668 [2024-11-15 12:48:10.641786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.668 [2024-11-15 12:48:10.641812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.668 qpair failed and we were unable to recover it. 00:26:30.668 [2024-11-15 12:48:10.641893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.668 [2024-11-15 12:48:10.641918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.668 qpair failed and we were unable to recover it. 00:26:30.668 [2024-11-15 12:48:10.642043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.668 [2024-11-15 12:48:10.642068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.668 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.642167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.642202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.642317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.642351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.642468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.642501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.642645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.642679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.642826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.642852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.642943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.642968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 
00:26:30.669 [2024-11-15 12:48:10.643053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.643078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.643161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.643187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.643270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.643296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.643468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.643503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.643625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.643661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.643841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.643867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.643987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.644013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.644126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.644151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.644263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.644288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.644458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.644523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 
00:26:30.669 [2024-11-15 12:48:10.644658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.644702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.644852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.644878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.644963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.644993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.645101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.645126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.645237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.645262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.645372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.645407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.645550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.645584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.645713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.645765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.645884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.645910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.646035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.646060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 
00:26:30.669 [2024-11-15 12:48:10.646174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.646200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.646300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.646334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.646452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.646487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.646594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.646629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.646778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.646804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.646887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.646912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.647026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.647075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.647286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.647321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.647449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.647475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.647661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.647696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 
00:26:30.669 [2024-11-15 12:48:10.647831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.647858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.669 [2024-11-15 12:48:10.648002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.669 [2024-11-15 12:48:10.648027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.669 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.648144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.648169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.648316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.648352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.648532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.648591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.648777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.648804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.648896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.648921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.649017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.649042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.649151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.649177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.649277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.649312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 
00:26:30.670 [2024-11-15 12:48:10.649416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.649450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.649588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.649622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.649730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.649766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.649875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.649900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.649995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.650020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.650162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.650208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.650318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.650353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.650534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.650568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.650680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.650705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.650812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.650839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 
00:26:30.670 [2024-11-15 12:48:10.650951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.650976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.651140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.651174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.651318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.651354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.651483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.651517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.651669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.651707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.651841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.651870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.651954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.651981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.652119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.652148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.652291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.652330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.652446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.652473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 
00:26:30.670 [2024-11-15 12:48:10.652588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.652615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.652700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.652732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.652817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.652843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.652957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.652989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.653094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.653128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.653243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.653277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.653430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.653465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.653644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.653680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.653801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.653829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 00:26:30.670 [2024-11-15 12:48:10.653919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.670 [2024-11-15 12:48:10.653945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.670 qpair failed and we were unable to recover it. 
00:26:30.671 [2024-11-15 12:48:10.654052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.654104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.654262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.654326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.654416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.654442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.654540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.654568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.654740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.654785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.654924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.654972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.655083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.655110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.655259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.655286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.655373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.655399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.655499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.655537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 
00:26:30.671 [2024-11-15 12:48:10.655690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.655727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.655873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.655908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.656048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.656082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.656215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.656249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.656385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.656420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.656600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.656626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.656697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.656739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.656829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.656854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.656987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.657022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.657151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.657185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 
00:26:30.671 [2024-11-15 12:48:10.657324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.657359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.657552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.657595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.657735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.657761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.657902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.657928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.658048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.658073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.658210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.658236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.658316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.658363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.658478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.658512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.658647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.658687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.658835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.658861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 
00:26:30.671 [2024-11-15 12:48:10.658978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.659003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.659115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.659140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.659282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.659308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.659456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.659492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.659668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.659703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.659823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.659848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.659950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.659984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.660131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.671 [2024-11-15 12:48:10.660166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.671 qpair failed and we were unable to recover it. 00:26:30.671 [2024-11-15 12:48:10.660283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.660309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.660423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.660457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 
00:26:30.672 [2024-11-15 12:48:10.660636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.660671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.660847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.660874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.661013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.661065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.661188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.661228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.661426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.661475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.661585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.661613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.661733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.661761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.661931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.661983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.662088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.662125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.662249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.662286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 
00:26:30.672 [2024-11-15 12:48:10.662402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.662430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.662569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.662595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.662731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.662759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.662898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.662946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.663055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.663107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.663248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.663315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.663430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.663456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.663571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.663609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.663704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.663738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.663880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.663906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 
00:26:30.672 [2024-11-15 12:48:10.664047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.664073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.664178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.664203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.664319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.664345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.664479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.664531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.664677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.664703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.664832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.664879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.664980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.665008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.665147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.665195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.665278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.665309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.665455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.665481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 
00:26:30.672 [2024-11-15 12:48:10.665569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.665597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.665685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.665712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.665841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.665869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.665982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.666018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.666163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.666198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.672 qpair failed and we were unable to recover it. 00:26:30.672 [2024-11-15 12:48:10.666312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.672 [2024-11-15 12:48:10.666346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.666505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.666530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.666618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.666643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.666750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.666792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.666927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.666962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 
00:26:30.673 [2024-11-15 12:48:10.667082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.667117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.667264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.667298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.667431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.667473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.667621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.667656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.667803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.667829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.667968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.667993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.668080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.668105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.668244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.668279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.668390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.668424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.668538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.668572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 
00:26:30.673 [2024-11-15 12:48:10.668731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.668760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.668882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.668908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.669058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.669114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.669286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.669335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.669431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.669465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.669579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.669606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.669700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.669736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.669856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.669887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.670031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.670057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.670164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.670192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 
00:26:30.673 [2024-11-15 12:48:10.670273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.670299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.670445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.670478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.670606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.670646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.670742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.670770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.670889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.670915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.671025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.671060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.673 qpair failed and we were unable to recover it. 00:26:30.673 [2024-11-15 12:48:10.671206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.673 [2024-11-15 12:48:10.671241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.671378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.671413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.671580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.671649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.671803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.671835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 
00:26:30.674 [2024-11-15 12:48:10.671920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.671948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.672067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.672102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.672258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.672293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.672446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.672480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.672595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.672620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.672757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.672783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.672895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.672920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.673001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.673026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.673114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.673139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.673215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.673240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 
00:26:30.674 [2024-11-15 12:48:10.673388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.673422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.673570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.673604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.673793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.673819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.673961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.673986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.674100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.674125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.674238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.674263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.674410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.674444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.674586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.674620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.674777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.674804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.674945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.674971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 
00:26:30.674 [2024-11-15 12:48:10.675084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.675109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.675195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.675220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.675365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.675399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.675537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.675571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.675714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.675776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.675865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.675891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.676005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.676035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.676204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.676238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.676350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.676395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.676544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.676578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 
00:26:30.674 [2024-11-15 12:48:10.676691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.676734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.676864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.676890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.676999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.677025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.677151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.677185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.674 [2024-11-15 12:48:10.677417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.674 [2024-11-15 12:48:10.677451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.674 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.677595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.677630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.677772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.677798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.677908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.677934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.678037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.678063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.678202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.678228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 
00:26:30.675 [2024-11-15 12:48:10.678352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.678394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.678547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.678581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.678786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.678812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.678920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.678945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.679054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.679079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.679170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.679195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.679337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.679373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.679517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.679552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.679692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.679734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.679865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.679890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 
00:26:30.675 [2024-11-15 12:48:10.679972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.679997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.680106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.680132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.680259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.680293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.680427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.680461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.680609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.680646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.680787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.680813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.680952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.680977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.681102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.681128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.681239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.681273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.681441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.681476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 
00:26:30.675 [2024-11-15 12:48:10.681620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.681655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.681828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.681854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.681965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.681991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.682105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.682131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.682241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.682276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.682391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.682425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.682571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.682606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.682763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.682802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.682930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.682960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.683056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.683088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 
00:26:30.675 [2024-11-15 12:48:10.683261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.683308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.683489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.683534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.683648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.675 [2024-11-15 12:48:10.683675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.675 qpair failed and we were unable to recover it. 00:26:30.675 [2024-11-15 12:48:10.683787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.683813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.683918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.683943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.684059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.684084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.684222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.684256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.684394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.684429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.684572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.684607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.684756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.684785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 
00:26:30.676 [2024-11-15 12:48:10.684910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.684938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.685083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.685133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.685269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.685319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.685430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.685461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.685548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.685574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.685691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.685732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.685838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.685865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.685973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.686003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.686133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.686161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.686282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.686309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 
00:26:30.676 [2024-11-15 12:48:10.686445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.686472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.686586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.686613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.686728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.686754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.686861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.686886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.687006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.687041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.687190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.687225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.687339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.687373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.687549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.687583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.687688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.687732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.687864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.687890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 
00:26:30.676 [2024-11-15 12:48:10.688044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.688079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.688218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.688253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.688404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.688439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.688540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.688575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.688771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.688798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.688879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.688905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.688983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.689009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.689145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.689175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.689253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.689303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.689448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.689483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 
00:26:30.676 [2024-11-15 12:48:10.689587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.676 [2024-11-15 12:48:10.689619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.676 qpair failed and we were unable to recover it. 00:26:30.676 [2024-11-15 12:48:10.689779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.689805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.689894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.689919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.690031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.690056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.690193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.690219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.690381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.690416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.690531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.690565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.690691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.690715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.690831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.690856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.690968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.690992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 
00:26:30.677 [2024-11-15 12:48:10.691128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.691162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.691280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.691310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.691454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.691492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.691639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.691665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.691755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.691782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.691926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.691974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.692109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.692157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.692328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.692377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.692518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.692544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.692625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.692652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 
00:26:30.677 [2024-11-15 12:48:10.692794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.692831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.692976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.693010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.693150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.693185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.693331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.693367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.693543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.693583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.693702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.693743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.693849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.693893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.694037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.694072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.694191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.694225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.694365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.694399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 
00:26:30.677 [2024-11-15 12:48:10.694543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.694577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.694682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.694716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.694884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.694909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.695021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.695046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.695124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.677 [2024-11-15 12:48:10.695171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.677 qpair failed and we were unable to recover it. 00:26:30.677 [2024-11-15 12:48:10.695346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.695380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.695494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.695528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.695671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.695706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.695854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.695880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.695992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.696017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 
00:26:30.678 [2024-11-15 12:48:10.696100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.696126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.696245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.696279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.696397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.696422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.696577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.696613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.696749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.696775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.696863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.696888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.696976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.697001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.697097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.697131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.697236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.697271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.697415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.697450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 
00:26:30.678 [2024-11-15 12:48:10.697581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.697615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.697745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.697786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.697936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.697962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.698101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.698126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.698246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.698271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.698354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.698401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.698545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.698589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.698710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.698742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.698865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.698890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.699002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.699036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 
00:26:30.678 [2024-11-15 12:48:10.699212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.699246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.699392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.699427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.699571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.699606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.699716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.699771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.699885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.699910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.700072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.700123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.700227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.700286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.700460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.700510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.700624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.700652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.700778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.700805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 
00:26:30.678 [2024-11-15 12:48:10.700885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.700911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.701053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.701079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.701169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.678 [2024-11-15 12:48:10.701194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.678 qpair failed and we were unable to recover it. 00:26:30.678 [2024-11-15 12:48:10.701284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.701309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.701441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.701475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.701603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.701648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.701763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.701789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.701920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.701954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.702078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.702111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.702277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.702311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 
00:26:30.679 [2024-11-15 12:48:10.702441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.702475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.702613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.702647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.702797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.702822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.702931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.702957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.703143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.703178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.703317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.703351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.703492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.703526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.703629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.703663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.703831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.703857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.703960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.703994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 
00:26:30.679 [2024-11-15 12:48:10.704140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.704176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.704322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.704358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.704522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.704576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.704673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.704701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.704867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.704894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.704984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.705010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.705147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.705197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.705346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.705398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.705483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.705509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.705622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.705650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 
00:26:30.679 [2024-11-15 12:48:10.705741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.705767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.705848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.705874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.705961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.705995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.706081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.706107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.706190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.706216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.706297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.706330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.706441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.706467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.706542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.706570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.706712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.706749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.706878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.706903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 
00:26:30.679 [2024-11-15 12:48:10.706995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.679 [2024-11-15 12:48:10.707022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.679 qpair failed and we were unable to recover it. 00:26:30.679 [2024-11-15 12:48:10.707161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.707186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.707295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.707320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.707425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.707459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.707570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.707603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.707743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.707786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.707937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.707971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.708075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.708109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.708262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.708297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.708468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.708524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 
00:26:30.680 [2024-11-15 12:48:10.708634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.708661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.708776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.708803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.708940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.708992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.709190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.709240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.709344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.709380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.709490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.709517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.709659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.709685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.709809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.709863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.709964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.709991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.710135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.710163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 
00:26:30.680 [2024-11-15 12:48:10.710274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.710323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.710434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.710460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.710579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.710611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.710704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.710740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.710849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.710881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.710977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.711003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.711093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.711118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.711199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.711229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.711349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.711375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.711459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.711485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 
00:26:30.680 [2024-11-15 12:48:10.711584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.711622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.711726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.711754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.711847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.711873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.711960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.711985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.712099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.712124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.712210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.712235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.712355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.712381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.712501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.712532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.712651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.680 [2024-11-15 12:48:10.712678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.680 qpair failed and we were unable to recover it. 00:26:30.680 [2024-11-15 12:48:10.712811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.712867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 
00:26:30.681 [2024-11-15 12:48:10.713017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.713064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.713171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.713222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.713325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.713384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.713528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.713554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.713647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.713679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.713799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.713827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.713952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.713977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.714087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.714113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.714222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.714248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.714398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.714440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 
00:26:30.681 [2024-11-15 12:48:10.714592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.714627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.714735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.714781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.714906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.714931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.715099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.715133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.715281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.715315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.715418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.715468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.715593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.715635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.715742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.715772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.715887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.715912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.716042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.716076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 
00:26:30.681 [2024-11-15 12:48:10.716227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.716262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.716373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.716418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.716520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.716554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.716700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.716763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.716879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.716905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.716978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.717004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.717110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.717136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.717277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.717302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.717397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.717422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.717524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.717559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 
00:26:30.681 [2024-11-15 12:48:10.717735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.717783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.717917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.717942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.718059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.718085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.718185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.718220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.718327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.718361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.718467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.718502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.718629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.718659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.681 qpair failed and we were unable to recover it. 00:26:30.681 [2024-11-15 12:48:10.718755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.681 [2024-11-15 12:48:10.718782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.718866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.718890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.719005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.719050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 
00:26:30.682 [2024-11-15 12:48:10.719212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.719255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.719452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.719494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.719597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.719631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.719799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.719826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.719937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.719962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.720074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.720100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.720209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.720234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.720376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.720401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.720480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.720505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.720612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.720647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 
00:26:30.682 [2024-11-15 12:48:10.720829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.720855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.720968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.720993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.721079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.721105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.721209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.721234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.721345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.721391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.721534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.721568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.721681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.721715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.721859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.721884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.721993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.722018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.722165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.722190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 
00:26:30.682 [2024-11-15 12:48:10.722306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.722331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.722440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.722473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.722618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.722643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.722766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.722797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.722885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.722910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.723030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.723055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.723169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.723195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.723278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.723326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.723471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.723505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.723614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.723639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 
00:26:30.682 [2024-11-15 12:48:10.723751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.682 [2024-11-15 12:48:10.723776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.682 qpair failed and we were unable to recover it. 00:26:30.682 [2024-11-15 12:48:10.723890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.723915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.724004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.724030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.724173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.724207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.724351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.724386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.724524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.724559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.724705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.724748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.724903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.724941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.725027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.725055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.725199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.725245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 
00:26:30.683 [2024-11-15 12:48:10.725353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.725402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.725508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.725539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.725636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.725663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.725744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.725772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.725891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.725918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.726033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.726059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.726147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.726178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.726298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.726324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.726437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.726467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.726565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.726592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 
00:26:30.683 [2024-11-15 12:48:10.726677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.726728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.726852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.726879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.726993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.727026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.727139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.727166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.727278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.727304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.727387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.727418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.727570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.727597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.727681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.727708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.727867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.727894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.728005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.728034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 
00:26:30.683 [2024-11-15 12:48:10.728156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.728182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.728287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.728313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.728439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.728467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.728609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.728642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.728743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.728771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.728871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.728910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.729032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.729059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.729173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.729199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.729337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.683 [2024-11-15 12:48:10.729363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.683 qpair failed and we were unable to recover it. 00:26:30.683 [2024-11-15 12:48:10.729472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.729498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 
00:26:30.684 [2024-11-15 12:48:10.729580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.729605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.729690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.729715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.729864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.729897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.730035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.730067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.730198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.730230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.730341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.730372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.730510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.730541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.730686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.730725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.730817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.730842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.730950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.730975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 
00:26:30.684 [2024-11-15 12:48:10.731131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.731162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.731270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.731303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.731433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.731465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.731600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.731631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.731778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.731804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.731886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.731911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.732035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.732060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.732145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.732169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.732275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.732300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.732457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.732487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 
00:26:30.684 [2024-11-15 12:48:10.732595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.732625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.732729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.732773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.732889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.732916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.733055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.733081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.733167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.733193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.733304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.733330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.733473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.733498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.733624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.733654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.733741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.733768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 00:26:30.684 [2024-11-15 12:48:10.733878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.684 [2024-11-15 12:48:10.733905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.684 qpair failed and we were unable to recover it. 
00:26:30.684 [2024-11-15 12:48:10.734013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:30.684 [2024-11-15 12:48:10.734044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 
00:26:30.684 qpair failed and we were unable to recover it. 
00:26:30.685 [2024-11-15 12:48:10.735554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:30.685 [2024-11-15 12:48:10.735582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 
00:26:30.685 qpair failed and we were unable to recover it. 
00:26:30.685 [2024-11-15 12:48:10.740207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1becf30 is same with the state(6) to be set 
00:26:30.685 [2024-11-15 12:48:10.740363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:30.685 [2024-11-15 12:48:10.740405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 
00:26:30.685 qpair failed and we were unable to recover it. 
[... the same "connect() failed, errno = 111" / "sock connection error of tqpair=... with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats continuously for tqpair=0x7fea04000b90, 0x1bdefa0 and 0x7fea0c000b90 between 12:48:10.734 and 12:48:10.764 ...]
00:26:30.690 [2024-11-15 12:48:10.764811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:30.690 [2024-11-15 12:48:10.764836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 
00:26:30.690 qpair failed and we were unable to recover it. 
00:26:30.690 [2024-11-15 12:48:10.764983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.690 [2024-11-15 12:48:10.765008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.690 qpair failed and we were unable to recover it. 00:26:30.690 [2024-11-15 12:48:10.765124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.690 [2024-11-15 12:48:10.765149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.690 qpair failed and we were unable to recover it. 00:26:30.690 [2024-11-15 12:48:10.765224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.690 [2024-11-15 12:48:10.765249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.690 qpair failed and we were unable to recover it. 00:26:30.690 [2024-11-15 12:48:10.765326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.690 [2024-11-15 12:48:10.765352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.690 qpair failed and we were unable to recover it. 00:26:30.690 [2024-11-15 12:48:10.765465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.690 [2024-11-15 12:48:10.765490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.690 qpair failed and we were unable to recover it. 00:26:30.690 [2024-11-15 12:48:10.765610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.690 [2024-11-15 12:48:10.765640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.690 qpair failed and we were unable to recover it. 00:26:30.690 [2024-11-15 12:48:10.765764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.690 [2024-11-15 12:48:10.765792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.690 qpair failed and we were unable to recover it. 00:26:30.690 [2024-11-15 12:48:10.765881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.690 [2024-11-15 12:48:10.765907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.690 qpair failed and we were unable to recover it. 00:26:30.690 [2024-11-15 12:48:10.765998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.690 [2024-11-15 12:48:10.766024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.690 qpair failed and we were unable to recover it. 00:26:30.690 [2024-11-15 12:48:10.766125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.690 [2024-11-15 12:48:10.766152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.690 qpair failed and we were unable to recover it. 
00:26:30.690 [2024-11-15 12:48:10.766259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.690 [2024-11-15 12:48:10.766291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.690 qpair failed and we were unable to recover it. 00:26:30.690 [2024-11-15 12:48:10.766382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.690 [2024-11-15 12:48:10.766408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.690 qpair failed and we were unable to recover it. 00:26:30.690 [2024-11-15 12:48:10.766490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.690 [2024-11-15 12:48:10.766515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.690 qpair failed and we were unable to recover it. 00:26:30.690 [2024-11-15 12:48:10.766600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.690 [2024-11-15 12:48:10.766625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.690 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.766747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.766773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.766883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.766909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.766986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.767012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.767124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.767149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.767257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.767282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.767391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.767416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 
00:26:30.691 [2024-11-15 12:48:10.767534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.767563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.767681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.767709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.767867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.767894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.768009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.768035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.768113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.768142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.768260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.768285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.768370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.768396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.768512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.768537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.768625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.768650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.768790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.768816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 
00:26:30.691 [2024-11-15 12:48:10.768935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.768960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.769039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.769064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.769170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.769195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.769270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.769295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.769434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.769459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.769545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.769570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.769674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.769699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.769845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.769870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.769996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.770025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.770148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.770176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 
00:26:30.691 [2024-11-15 12:48:10.770258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.770284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.770413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.770441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.770522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.770549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.691 qpair failed and we were unable to recover it. 00:26:30.691 [2024-11-15 12:48:10.770632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.691 [2024-11-15 12:48:10.770658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.770775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.770802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.770909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.770935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.771049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.771074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.771216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.771241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.771320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.771346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.771433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.771458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 
00:26:30.692 [2024-11-15 12:48:10.771571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.771600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.771758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.771790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.771882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.771909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.772017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.772044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.772163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.772190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.772306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.772334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.772419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.772446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.772558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.772590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.772733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.772760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.772847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.772880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 
00:26:30.692 [2024-11-15 12:48:10.773003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.773029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.773194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.773221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.773317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.773344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.773459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.773490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.773592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.773618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.773766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.773801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.773923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.773949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.774061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.774090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.774217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.774245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.774335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.774362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 
00:26:30.692 [2024-11-15 12:48:10.774466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.774494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.774579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.774613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.774734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.774768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.774880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.774906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.774990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.775017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.775138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.775165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.775285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.692 [2024-11-15 12:48:10.775312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.692 qpair failed and we were unable to recover it. 00:26:30.692 [2024-11-15 12:48:10.775427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.775455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.775542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.775569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.775662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.775688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 
00:26:30.693 [2024-11-15 12:48:10.775812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.775850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.775973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.776000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.776085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.776111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.776193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.776218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.776336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.776361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.776443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.776469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.776554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.776578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.776696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.776728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.776869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.776895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.777042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.777067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 
00:26:30.693 [2024-11-15 12:48:10.777165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.777191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.777330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.777355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.777452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.777480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.777622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.777649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.777797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.777825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.777925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.777952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.778072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.778104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.778226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.778253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.778340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.778366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.778478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.778503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 
00:26:30.693 [2024-11-15 12:48:10.778612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.778638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.778746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.778772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.778849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.778874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.778952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.778977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.779063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.779088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.779203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.779228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.779336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.779361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.779455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.779483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.779575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.779601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.779686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.779716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 
00:26:30.693 [2024-11-15 12:48:10.779810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.693 [2024-11-15 12:48:10.779836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.693 qpair failed and we were unable to recover it. 00:26:30.693 [2024-11-15 12:48:10.779917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.779943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.780048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.780075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.780160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.780186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.780299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.780327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.780481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.780507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.780650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.780676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.780768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.780794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.780905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.780930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.781021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.781047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 
00:26:30.694 [2024-11-15 12:48:10.781125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.781151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.781260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.781285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.781428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.781453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.781566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.781591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.781677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.781702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.781814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.781839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.781919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.781945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.782023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.782048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.782163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.782196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.782326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.782353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 
00:26:30.694 [2024-11-15 12:48:10.782465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.782491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.782577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.782604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.782771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.782808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.782953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.782984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.783079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.783105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.783217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.783243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.783353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.783379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.783468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.783494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.783576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.783604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.783711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.783752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 
00:26:30.694 [2024-11-15 12:48:10.783900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.783927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.784017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.784043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.784164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.784190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.784275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.784304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.784442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.784469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.784583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.784614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.694 [2024-11-15 12:48:10.784700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.694 [2024-11-15 12:48:10.784745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.694 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.784868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.784894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.784983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.785008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.785124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.785150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 
00:26:30.695 [2024-11-15 12:48:10.785266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.785291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.785406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.785431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.785518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.785543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.785628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.785653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.785734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.785760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.785834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.785859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.785972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.785997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.786111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.786136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.786221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.786246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.786391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.786416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 
00:26:30.695 [2024-11-15 12:48:10.786524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.786550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.786694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.786727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.786835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.786860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.786946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.786971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.787105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.787131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.787239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.787264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.787348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.787373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.787486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.787512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.787597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.787622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.787733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.787759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 
00:26:30.695 [2024-11-15 12:48:10.787846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.787871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.787952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.787977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.788090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.788115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.788196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.788221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.788332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.788357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.788472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.788497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.788604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.788629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.788764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.788789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.788908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.788933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.789021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.789050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 
00:26:30.695 [2024-11-15 12:48:10.789134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.789160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.789242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.789268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.789377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.789402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.695 qpair failed and we were unable to recover it. 00:26:30.695 [2024-11-15 12:48:10.789494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.695 [2024-11-15 12:48:10.789519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.789605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.789630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.789710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.789750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.789866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.789900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.789992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.790018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.790106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.790133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.790257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.790284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 
00:26:30.696 [2024-11-15 12:48:10.790421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.790446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.790533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.790558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.790668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.790694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.790812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.790838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.790921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.790947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.791056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.791081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.791171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.791196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.791311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.791336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.791445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.791470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.791582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.791606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 
00:26:30.696 [2024-11-15 12:48:10.791693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.791725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.791832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.791857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.791973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.791998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.792087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.792112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.792228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.792253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.792339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.792364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.792479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.792505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.792615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.792641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.792746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.792772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.792911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.792936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 
00:26:30.696 [2024-11-15 12:48:10.793053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.793078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.793166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.793190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.793303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.793328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.696 [2024-11-15 12:48:10.793423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.696 [2024-11-15 12:48:10.793467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.696 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.793556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.793583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.793710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.793750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.793868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.793896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.793985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.794013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.794125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.794151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.794261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.794287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 
00:26:30.697 [2024-11-15 12:48:10.794426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.794451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.794561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.794586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.794661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.794687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.794795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.794821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.794935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.794960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.795043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.795068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.795185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.795210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.795297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.795322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.795411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.795436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.795514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.795539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 
00:26:30.697 [2024-11-15 12:48:10.795679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.795704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.795815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.795841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.795986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.796012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.796117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.796142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.796256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.796282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.796395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.796420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.796559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.796584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.796697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.796730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.796844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.796868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.796979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.797004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 
00:26:30.697 [2024-11-15 12:48:10.797081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.797110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.797250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.797276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.797412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.797437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.797587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.797626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.797732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.797762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.797875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.797907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.797995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.798021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.798160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.798193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.798337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.798365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.798490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.798516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 
00:26:30.697 [2024-11-15 12:48:10.798631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.798657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.697 [2024-11-15 12:48:10.798759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.697 [2024-11-15 12:48:10.798784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.697 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.798895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.798921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.799032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.799057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.799148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.799173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.799287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.799311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.799392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.799417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.799492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.799517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.799630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.799658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.799750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.799777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 
00:26:30.698 [2024-11-15 12:48:10.799854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.799881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.800017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.800043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.800194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.800221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.800333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.800361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.800444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.800471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.800578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.800604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.800704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.800740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.800869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.800896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.800987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.801012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.801122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.801147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 
00:26:30.698 [2024-11-15 12:48:10.801256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.801281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.801368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.801393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.801508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.801532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.801614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.801643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.801736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.801764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.801910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.801938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.802027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.802053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.802140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.802171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.802290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.802316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.802430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.802457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 
00:26:30.698 [2024-11-15 12:48:10.802604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.802629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.802748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.802773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.802868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.802893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.802977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.803002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.803109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.803134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.803280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.803304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.803441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.803466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.803575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.803599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.698 [2024-11-15 12:48:10.803741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.698 [2024-11-15 12:48:10.803767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.698 qpair failed and we were unable to recover it. 00:26:30.699 [2024-11-15 12:48:10.803878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.699 [2024-11-15 12:48:10.803902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.699 qpair failed and we were unable to recover it. 
00:26:30.699 [2024-11-15 12:48:10.803983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.699 [2024-11-15 12:48:10.804008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.699 qpair failed and we were unable to recover it. 00:26:30.699 [2024-11-15 12:48:10.804121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.699 [2024-11-15 12:48:10.804146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.699 qpair failed and we were unable to recover it. 00:26:30.699 [2024-11-15 12:48:10.804233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.699 [2024-11-15 12:48:10.804257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.699 qpair failed and we were unable to recover it. 00:26:30.699 [2024-11-15 12:48:10.804336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.699 [2024-11-15 12:48:10.804361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.699 qpair failed and we were unable to recover it. 00:26:30.699 [2024-11-15 12:48:10.804456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.699 [2024-11-15 12:48:10.804486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.699 qpair failed and we were unable to recover it. 00:26:30.699 [2024-11-15 12:48:10.804642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.699 [2024-11-15 12:48:10.804669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.699 qpair failed and we were unable to recover it. 00:26:30.699 [2024-11-15 12:48:10.804773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.699 [2024-11-15 12:48:10.804801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.699 qpair failed and we were unable to recover it. 00:26:30.699 [2024-11-15 12:48:10.804916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.699 [2024-11-15 12:48:10.804942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.699 qpair failed and we were unable to recover it. 00:26:30.699 [2024-11-15 12:48:10.805058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.699 [2024-11-15 12:48:10.805083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.699 qpair failed and we were unable to recover it. 00:26:30.699 [2024-11-15 12:48:10.805225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.699 [2024-11-15 12:48:10.805249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.699 qpair failed and we were unable to recover it. 
00:26:30.699 [2024-11-15 12:48:10.805335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.699 [2024-11-15 12:48:10.805360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420
00:26:30.699 qpair failed and we were unable to recover it.
00:26:30.699 [2024-11-15 12:48:10.805695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.699 [2024-11-15 12:48:10.805731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:30.699 qpair failed and we were unable to recover it.
00:26:30.699 [... the same three-line sequence — connect() failed, errno = 111 (connection refused), sock connection error, "qpair failed and we were unable to recover it." — repeats continuously from 12:48:10.805 through 12:48:10.832 (console time 00:26:30.699–00:26:30.704), alternating between tqpair=0x1bdefa0 and tqpair=0x7fea04000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:26:30.704 [2024-11-15 12:48:10.833083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.833108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.833221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.833246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.833362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.833387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.833505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.833530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.833637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.833663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.833800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.833827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.833914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.833939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.834018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.834043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.834128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.834153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.834236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.834264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 
00:26:30.704 [2024-11-15 12:48:10.834352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.834379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.834495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.834521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.834630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.834656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.834782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.834815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.834956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.834982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.835101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.835128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.835208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.835234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.835346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.835372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.835454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.835480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.835558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.835584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 
00:26:30.704 [2024-11-15 12:48:10.835676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.835702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.835826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.835851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.835962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.835988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.836093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.836118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.836201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.836226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.836299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.836323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.836439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.836464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.836607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.836632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.836721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.836747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.836864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.836889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 
00:26:30.704 [2024-11-15 12:48:10.836978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.837004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.837117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.837143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.837261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.837286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.837399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.837424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.837509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.837534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.837623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.704 [2024-11-15 12:48:10.837648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.704 qpair failed and we were unable to recover it. 00:26:30.704 [2024-11-15 12:48:10.837766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.837796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.837885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.837912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.838034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.838062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.838138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.838166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 
00:26:30.705 [2024-11-15 12:48:10.838247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.838278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.838358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.838384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.838495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.838521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.838630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.838655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.838740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.838767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.838874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.838899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.839012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.839038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.839121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.839146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.839287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.839320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.839415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.839446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 
00:26:30.705 [2024-11-15 12:48:10.839548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.839600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.839797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.839826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.839979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.840027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.840166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.840210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.840355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.840389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.840558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.840585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.840700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.840737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.840826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.840852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.840970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.840995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.841076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.841101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 
00:26:30.705 [2024-11-15 12:48:10.841213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.841239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.841351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.841376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.841483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.841509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.841613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.841638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.841731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.841757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.841893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.841919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.842011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.842036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.842121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.842151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.842264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.842290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.842412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.842441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 
00:26:30.705 [2024-11-15 12:48:10.842557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.842583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.842730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.842758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.842872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.842898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.843011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.843039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.843131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.843158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.843245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.705 [2024-11-15 12:48:10.843273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.705 qpair failed and we were unable to recover it. 00:26:30.705 [2024-11-15 12:48:10.843366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.843393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.843503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.843530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.843624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.843651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.843759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.843786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 
00:26:30.706 [2024-11-15 12:48:10.843880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.843910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.844003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.844030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.844145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.844171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.844259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.844289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.844404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.844430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.844547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.844575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.844691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.844724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.844870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.844897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.845042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.845067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.845205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.845230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 
00:26:30.706 [2024-11-15 12:48:10.845322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.845347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.845494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.845519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.845606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.845632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.845748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.845774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.845891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.845916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.846029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.846055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.846165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.846190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.846301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.846327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.846411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.846436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.846552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.846581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 
00:26:30.706 [2024-11-15 12:48:10.846733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.846762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.846874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.846901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.847052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.847080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.847172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.847209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.847324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.847351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.847496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.847523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.847603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.847630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.847743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.847769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.847880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.847905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.847991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.848016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 
00:26:30.706 [2024-11-15 12:48:10.848118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.848144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.848246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.848271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.848382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.848407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.848492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.848518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.848658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.848686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.848806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.848837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.848958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.848985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.849065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.849097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.849219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.849245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.849388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.849419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 
00:26:30.706 [2024-11-15 12:48:10.849515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.849542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.849666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.849692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.849821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.849846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.706 [2024-11-15 12:48:10.849952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.706 [2024-11-15 12:48:10.849978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.706 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.850063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.850088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.850198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.850224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.850303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.850329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.850435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.850461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.850601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.850625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.850701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.850739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 
00:26:30.707 [2024-11-15 12:48:10.850886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.850918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.851042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.851068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.851209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.851236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.851351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.851377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.851513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.851550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.851671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.851697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.851795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.851821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.851927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.851952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.852027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.852053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.852164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.852190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 
00:26:30.707 [2024-11-15 12:48:10.852328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.852353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.852504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.852532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.852623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.852649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.852770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.852798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.852880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.852907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.852990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.853016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.853139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.853166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.853283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.853311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.853415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.853443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.853528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.853559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 
00:26:30.707 [2024-11-15 12:48:10.853641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.853668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.853809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.853835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.853914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.853939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.854053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.854078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.854230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.854266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.854358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.854384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.854519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.854545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.854657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.854683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.854799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.854824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.854944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.854969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 
00:26:30.707 [2024-11-15 12:48:10.855054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.855080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.855196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.855226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.855306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.855332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.855450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.855475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.855584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.855609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.855696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.855728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.855842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.855868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.855979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.856005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.856086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.856112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.856200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.856225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 
00:26:30.707 [2024-11-15 12:48:10.856337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.856362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.856474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.856499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.856584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.856609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.707 qpair failed and we were unable to recover it. 00:26:30.707 [2024-11-15 12:48:10.856688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.707 [2024-11-15 12:48:10.856713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.856829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.856855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.857005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.857031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.857115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.857140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.857228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.857254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.857358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.857384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.857462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.857487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 
00:26:30.708 [2024-11-15 12:48:10.857625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.857651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.857761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.857787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.857870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.857897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.857984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.858010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.858144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.858170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.858248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.858273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.858360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.858385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.858497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.858523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.858633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.858658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.858779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.858805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 
00:26:30.708 [2024-11-15 12:48:10.858920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.858946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.859058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.859083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.859167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.859192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.859275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.859301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.859380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.859405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.859492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.859518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.859630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.859655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.859742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.859768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.859901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.859926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.860013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.860038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 
00:26:30.708 [2024-11-15 12:48:10.860152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.860177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.860286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.860311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.860395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.860421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.860538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.860564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.860644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.860670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.860757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.860784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.860899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.860925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.861059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.861084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.861223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.861248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.861357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.861382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 
00:26:30.708 [2024-11-15 12:48:10.861497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.861522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.861611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.861637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.861750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.861776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.861855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.861881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.862002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.862027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.862111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.862136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.862254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.862280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.862392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.862417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.862499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.862524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.708 qpair failed and we were unable to recover it. 00:26:30.708 [2024-11-15 12:48:10.862610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.708 [2024-11-15 12:48:10.862636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 
00:26:30.709 [2024-11-15 12:48:10.862757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.862782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.862899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.862924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.863033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.863058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.863140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.863165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.863246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.863271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.863381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.863407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.863510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.863536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.863647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.863672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.863808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.863833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.863939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.863968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 
00:26:30.709 [2024-11-15 12:48:10.864047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.864073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.864196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.864228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.864319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.864350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.864478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.864510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.864663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.864702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.864864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.864892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.865014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.865040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.865205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.865250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.865380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.865427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.865541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.865567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 
00:26:30.709 [2024-11-15 12:48:10.865684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.865712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.865812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.865838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.865945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.865971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.866066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.866092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.866237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.866263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.866376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.866403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.866533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.866571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.866736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.866764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.866845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.866870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.866958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.866983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 
00:26:30.709 [2024-11-15 12:48:10.867070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.867095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.867231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.867256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.867388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.867438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.867574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.867600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.867742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.867769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.867903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.867950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.868119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.868166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.868269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.868302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.868468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.868494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.868586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.868611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 
00:26:30.709 [2024-11-15 12:48:10.868698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.868729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.868846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.868871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.868991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.869016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.869130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.869155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.869234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.869260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.869367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.869391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.869472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.869497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.869582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.869608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.869697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.869727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 00:26:30.709 [2024-11-15 12:48:10.869839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.709 [2024-11-15 12:48:10.869864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.709 qpair failed and we were unable to recover it. 
00:26:30.710 [2024-11-15 12:48:10.869956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.869982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.870095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.870120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.870198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.870223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.870309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.870335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.870444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.870469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.870550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.870576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.870698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.870742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.870830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.870857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.870946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.870973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.871093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.871121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 
00:26:30.710 [2024-11-15 12:48:10.871211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.871238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.871318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.871344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.871461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.871488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.871608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.871634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.871743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.871769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.871854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.871880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.871990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.872015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.872099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.872124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.872245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.872270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.872356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.872381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 
00:26:30.710 [2024-11-15 12:48:10.872491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.872516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.872598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.872623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.872759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.872785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.872871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.872897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.873009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.873035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.873119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.873144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.873252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.873278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.873358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.873384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.873500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.873525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.873605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.873629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 
00:26:30.710 [2024-11-15 12:48:10.873773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.873802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.873896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.873922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.874013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.874039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.874138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.874186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.874278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.874305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.874392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.874419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.874536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.874562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.874654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.874679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.874765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.874790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.874907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.874932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 
00:26:30.710 [2024-11-15 12:48:10.875048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.875074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.875155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.875180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.875261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.875289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.875369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.875395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.875521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.875547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.875689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.875715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.875859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.875904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.875979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.876005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.876152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.876200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 00:26:30.710 [2024-11-15 12:48:10.876373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.710 [2024-11-15 12:48:10.876421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.710 qpair failed and we were unable to recover it. 
00:26:30.711 [2024-11-15 12:48:10.876535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.876561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.876723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.876765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.876875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.876907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.877074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.877106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.877272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.877304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.877436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.877468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.877576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.877608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.877742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.877770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.877903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.877948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.878082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.878125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 
00:26:30.711 [2024-11-15 12:48:10.878286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.878336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.878500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.878545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.878657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.878683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.878852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.878886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.879049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.879081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.879244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.879276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.879370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.879401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.879542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.879573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.879736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.879781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.879936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.879970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 
00:26:30.711 [2024-11-15 12:48:10.880079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.880113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.880250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.880282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.880390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.880418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.880512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.880540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.880653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.880680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.880802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.880830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.880945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.880971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.881085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.881111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.881229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.881255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.881363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.881389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 
00:26:30.711 [2024-11-15 12:48:10.881500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.881529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.881667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.881693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.881785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.881811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.881920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.881945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.882027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.882054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.882135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.882162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.882275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.882301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.882461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.882508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.882616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.882642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.882784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.882831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 
00:26:30.711 [2024-11-15 12:48:10.882944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.882970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.883110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.883136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.883248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.883274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.883390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.883416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.883533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.883558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.883670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.883697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.883843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.883892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.884039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.884085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.884197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.884223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.711 [2024-11-15 12:48:10.884332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.884357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 
00:26:30.711 [2024-11-15 12:48:10.884464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.711 [2024-11-15 12:48:10.884490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.711 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.884626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.884652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.884794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.884821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.884941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.884968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.885080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.885105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.885250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.885276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.885362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.885388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.885498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.885529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.885646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.885672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.885765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.885792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 
00:26:30.712 [2024-11-15 12:48:10.885935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.885961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.886100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.886126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.886287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.886348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.886493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.886519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.886639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.886677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.886820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.886866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.886998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.887030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.887168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.887199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.887330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.887362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.887491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.887523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 
00:26:30.712 [2024-11-15 12:48:10.887654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.887682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.887840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.887867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.887974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.888007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.888157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.888203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.888346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.888390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.888502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.888528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.888641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.888667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.888754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.888782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.888899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.888925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.889067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.889093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 
00:26:30.712 [2024-11-15 12:48:10.889203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.889229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.889346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.889372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.889480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.889505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.889644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.889670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.889820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.889847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.889991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.890044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.890158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.890184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.890265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.712 [2024-11-15 12:48:10.890291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.712 qpair failed and we were unable to recover it. 00:26:30.712 [2024-11-15 12:48:10.890428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.890454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.890593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.890619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 
00:26:30.713 [2024-11-15 12:48:10.890738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.890765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.890930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.890977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.891120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.891167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.891281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.891307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.891400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.891426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.891507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.891533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.891643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.891669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.891814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.891858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.892003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.892030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.892118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.892143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 
00:26:30.713 [2024-11-15 12:48:10.892251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.892277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.892383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.892409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.892526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.892551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.892639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.892664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.892780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.892806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.892921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.892946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.893102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.893128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.893241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.893267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.893372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.893397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.893503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.893528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 
00:26:30.713 [2024-11-15 12:48:10.893607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.893632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.893732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.893758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.893840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.893865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.893942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.893968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.894109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.894142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.894253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.894294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.894374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.894399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.894510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.894535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.894645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.894669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.894785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.894811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 
00:26:30.713 [2024-11-15 12:48:10.894919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.894944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.895052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.895084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.895202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.895250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.895391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.895424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.895550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.713 [2024-11-15 12:48:10.895589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.713 qpair failed and we were unable to recover it. 00:26:30.713 [2024-11-15 12:48:10.895735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.895761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.895845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.895870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.895983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.896007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.896094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.896127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.896294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.896327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 
00:26:30.714 [2024-11-15 12:48:10.896430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.896465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.896565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.896598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.896743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.896786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.896900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.896925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.897004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.897029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.897148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.897173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.897261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.897286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.897403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.897428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.897551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.897590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.897710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.897751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 
00:26:30.714 [2024-11-15 12:48:10.897852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.897880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.897991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.898017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.898129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.898155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.898272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.898298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.898409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.898435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.898579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.898605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.898702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.898752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.898881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.898908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.899047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.899073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.899159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.899184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 
00:26:30.714 [2024-11-15 12:48:10.899275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.899299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.899413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.899443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.899561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.899588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.899715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.899747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.899825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.899851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.899966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.899993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.900078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.900104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.900196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.900222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.900333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.900359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.900451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.900491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 
00:26:30.714 [2024-11-15 12:48:10.900613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.900640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.900732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.900761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.900873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.714 [2024-11-15 12:48:10.900899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.714 qpair failed and we were unable to recover it. 00:26:30.714 [2024-11-15 12:48:10.900983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.901009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.901135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.901161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.901245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.901271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.901410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.901436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.901511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.901537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.901649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.901675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.901807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.901833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 
00:26:30.715 [2024-11-15 12:48:10.901942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.901968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.902076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.902102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.902246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.902272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.902387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.902413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.902533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.902560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.902674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.902701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.902788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.902814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.902912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.902938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.903015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.903041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.903153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.903179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 
00:26:30.715 [2024-11-15 12:48:10.903317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.903343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.903457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.903483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.903569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.903595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.903701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.903732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.903837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.903864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.903988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.904027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.904109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.904135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.904247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.904273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.904381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.904407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.904522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.904547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 
00:26:30.715 [2024-11-15 12:48:10.904626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.904652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.904764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.904791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.904945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.904973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.905115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.905141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.905235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.905261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.905346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.905371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.905509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.905535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.905649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.905676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.715 qpair failed and we were unable to recover it. 00:26:30.715 [2024-11-15 12:48:10.905799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.715 [2024-11-15 12:48:10.905826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.905929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.905956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 
00:26:30.716 [2024-11-15 12:48:10.906103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.906130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.906244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.906270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.906381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.906407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.906521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.906550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.906664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.906691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.906819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.906854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.906959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.906987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.907072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.907097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.907215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.907240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.907384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.907410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 
00:26:30.716 [2024-11-15 12:48:10.907527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.907555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.907637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.907662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.907754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.907780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.907871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.907897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.907985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.908010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.908084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.908109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.908215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.908240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.908348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.908373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.908453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.908483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.908559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.908584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 
00:26:30.716 [2024-11-15 12:48:10.908729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.908758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.908840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.908867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.909005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.909031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.909143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.909170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.909279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.909306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.909399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.716 [2024-11-15 12:48:10.909427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.716 qpair failed and we were unable to recover it. 00:26:30.716 [2024-11-15 12:48:10.909538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.909566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.909707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.909740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.909848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.909874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.909956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.909983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 
00:26:30.717 [2024-11-15 12:48:10.910123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.910149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.910267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.910294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.910415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.910442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.910580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.910606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.910735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.910761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.910849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.910875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.910961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.910987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.911101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.911129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.911212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.911239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.911389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.911415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 
00:26:30.717 [2024-11-15 12:48:10.911525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.911551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.911635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.911661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.911791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.911819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.911931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.911956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.912100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.912126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.912242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.912274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.912362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.912389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.717 [2024-11-15 12:48:10.912509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.717 [2024-11-15 12:48:10.912538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.717 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.912627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.912654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.912738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.912768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 
00:26:30.718 [2024-11-15 12:48:10.912883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.912910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.913028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.913054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.913167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.913195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.913316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.913343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.913482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.913507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.913591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.913618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.913735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.913773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.913926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.913951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.914088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.914114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.914233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.914261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 
00:26:30.718 [2024-11-15 12:48:10.914350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.914376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.914485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.914511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.914590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.914617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.914709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.914742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.914835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.914861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.914966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.914992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.915133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.915159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.915244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.915270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.915379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.915404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.915547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.915572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 
00:26:30.718 [2024-11-15 12:48:10.915695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.915745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.915845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.915872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.915987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.916013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.916088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.916113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.718 qpair failed and we were unable to recover it. 00:26:30.718 [2024-11-15 12:48:10.916228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.718 [2024-11-15 12:48:10.916254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.916367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.916392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.916476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.916503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.916622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.916650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.916765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.916792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.916872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.916897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 
00:26:30.719 [2024-11-15 12:48:10.917024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.917050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.917192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.917218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.917338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.917365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.917480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.917509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.917623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.917649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.917757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.917797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.917917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.917944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.918053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.918079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.918157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.918183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.918290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.918316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 
00:26:30.719 [2024-11-15 12:48:10.918398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.918424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.918536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.918562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.918698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.918729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.918816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.918842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.918964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.918992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.919099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.919125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.919208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.919234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.919315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.919340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.919478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.919504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.919626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.919652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 
00:26:30.719 [2024-11-15 12:48:10.919737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.719 [2024-11-15 12:48:10.919765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.719 qpair failed and we were unable to recover it. 00:26:30.719 [2024-11-15 12:48:10.919883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.919910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.919989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.920015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.920141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.920168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.920307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.920333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.920451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.920477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.920587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.920613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.920727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.920754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.920840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.920866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.920975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.921001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 
00:26:30.720 [2024-11-15 12:48:10.921115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.921141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.921251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.921277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.921394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.921423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.921553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.921591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.921710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.921747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.921837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.921863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.921953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.921979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.922060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.922085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.922228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.922256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.922378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.922405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 
00:26:30.720 [2024-11-15 12:48:10.922547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.922573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.922713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.922745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.922829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.922855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.922944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.922970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.923082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.923108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.923221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.923254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.923371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.923398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.923550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.720 [2024-11-15 12:48:10.923589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.720 qpair failed and we were unable to recover it. 00:26:30.720 [2024-11-15 12:48:10.923713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.923757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.923870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.923896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 
00:26:30.721 [2024-11-15 12:48:10.924008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.924034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.924151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.924177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.924258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.924284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.924420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.924447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.924557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.924583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.924665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.924691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.924805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.924831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.924923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.924949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.925059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.925085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.925166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.925193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 
00:26:30.721 [2024-11-15 12:48:10.925303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.925329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.925403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.925429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.925540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.925566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.925723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.925762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.925876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.925903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.926021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.926047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.926129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.926155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.926272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.926299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.926387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.926412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.926551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.926576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 
00:26:30.721 [2024-11-15 12:48:10.926686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.926711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.926830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.926856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.926946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.926980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.927061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.721 [2024-11-15 12:48:10.927086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.721 qpair failed and we were unable to recover it. 00:26:30.721 [2024-11-15 12:48:10.927199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.927225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.927337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.927364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.927473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.927499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.927644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.927670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.927793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.927819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.927907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.927933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 
00:26:30.722 [2024-11-15 12:48:10.928048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.928074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.928188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.928215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.928296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.928322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.928476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.928515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.928636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.928662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.928751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.928777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.928894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.928919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.929002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.929028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.929136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.929161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.929242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.929267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 
00:26:30.722 [2024-11-15 12:48:10.929379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.929404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.929517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.929542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.929623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.929651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.929740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.929767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.929880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.929906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.930049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.930075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.930185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.930213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.930300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.930326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.930405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.930431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.930526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.930561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 
00:26:30.722 [2024-11-15 12:48:10.930680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.930707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.722 qpair failed and we were unable to recover it. 00:26:30.722 [2024-11-15 12:48:10.930805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.722 [2024-11-15 12:48:10.930831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.930949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.930975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.931117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.931142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.931254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.931279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.931421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.931448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.931561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.931588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.931695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.931732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.931815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.931842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.931931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.931958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 
00:26:30.723 [2024-11-15 12:48:10.932035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.932060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.932206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.932232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.932344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.932370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.932461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.932486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.932571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.932598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.932722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.932749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.932859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.932884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.933003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.933028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.933138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.933165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.933283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.933308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 
00:26:30.723 [2024-11-15 12:48:10.933391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.933418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.933506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.933533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.723 qpair failed and we were unable to recover it. 00:26:30.723 [2024-11-15 12:48:10.933648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.723 [2024-11-15 12:48:10.933674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.933799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.933825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.933943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.933968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.934084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.934111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.934225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.934251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.934390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.934416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.934538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.934577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.934729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.934757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 
00:26:30.724 [2024-11-15 12:48:10.934848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.934875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.934990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.935016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.935128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.935154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.935239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.935266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.935378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.935404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.935546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.935572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.935682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.935708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.935861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.935887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.936008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.936036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.936123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.936155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 
00:26:30.724 [2024-11-15 12:48:10.936279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.936305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.936446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.936472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.936561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.936588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.936669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.936697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.936823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.936849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.936988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.937013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.937099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.937124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.937202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.937227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-15 12:48:10.937334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-15 12:48:10.937359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.937448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.937473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 
00:26:30.725 [2024-11-15 12:48:10.937620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.937648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.937794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.937821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.937966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.937994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.938117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.938143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.938229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.938255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.938346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.938372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.938466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.938492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.938633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.938658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.938755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.938783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.938896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.938922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 
00:26:30.725 [2024-11-15 12:48:10.939040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.939066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.939152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.939178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.939290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.939316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.939432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.939458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.939571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.939597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.939734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.939760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.939852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.939884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.940002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.940028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.940135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.940160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.942853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.942894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 
00:26:30.725 [2024-11-15 12:48:10.943028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.943057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.943176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.943202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.943292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.943318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.943423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.943449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.943566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-15 12:48:10.943592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-15 12:48:10.943712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.943750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.943892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.943918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.944071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.944097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.944214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.944239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.944353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.944379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 
00:26:30.726 [2024-11-15 12:48:10.944502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.944529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.944619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.944644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.944739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.944766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.944939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.944992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.945123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.945173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.945263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.945289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.945430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.945457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.945569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.945595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.945711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.945747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.945861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.945888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 
00:26:30.726 [2024-11-15 12:48:10.945979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.946006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.946092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.946118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.946197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.946224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.946342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.946368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.946475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.946501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.946668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.946707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.946867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.946895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.947019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.947045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.947159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.947185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.947276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.947302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 
00:26:30.726 [2024-11-15 12:48:10.947411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.947436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-15 12:48:10.947516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-15 12:48:10.947544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.947663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.947689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.947846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.947899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.948058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.948095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.948270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.948305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.948489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.948560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.948730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.948782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.948893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.948920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.949091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.949126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 
00:26:30.727 [2024-11-15 12:48:10.949270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.949306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.949440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.949476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.949629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.949657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.949763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.949790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.949903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.949930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.950076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.950125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.950261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.950304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.950443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.950469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.950610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.950636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.950767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.950805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 
00:26:30.727 [2024-11-15 12:48:10.950934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.950961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.951131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.951168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.951348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.951384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.951521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.951557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.951761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.951800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.951948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.951975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.952082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.952107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-15 12:48:10.952219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-15 12:48:10.952247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.952336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.952362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.952472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.952498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 
00:26:30.728 [2024-11-15 12:48:10.952594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.952620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.952747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.952786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.952917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.952946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.953076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.953114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.953265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.953292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.953380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.953406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.953511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.953536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.953654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.953681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.953802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.953829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.953948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.953975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 
00:26:30.728 [2024-11-15 12:48:10.954058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.954084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.954191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.954216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.954357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.954383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.954526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.954552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.954636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.954666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.954763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.954790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.954901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.954927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.955047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.955073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.955188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.955213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.955334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.955360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 
00:26:30.728 [2024-11-15 12:48:10.955498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.955524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.955618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.955644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.955757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.955783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.955898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.955924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.728 [2024-11-15 12:48:10.956045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.728 [2024-11-15 12:48:10.956071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.728 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.956189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.956215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.956356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.956382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.956472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.956498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.956637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.956663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.956799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.956847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 
00:26:30.729 [2024-11-15 12:48:10.956988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.957035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.957188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.957235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.957353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.957379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.957468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.957494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.957573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.957599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.957743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.957770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.957874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.957923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.958061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.958086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.958212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.958238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.958378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.958403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 
00:26:30.729 [2024-11-15 12:48:10.958483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.958509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.958626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.958651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.958792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.958843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.958953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.958983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.959099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.959125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.959238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.959264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.959400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.959425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.959508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.959534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.959616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.959642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.729 [2024-11-15 12:48:10.959785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.959811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 
00:26:30.729 [2024-11-15 12:48:10.959918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.729 [2024-11-15 12:48:10.959944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.729 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.960059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.960084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.960168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.960193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.960278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.960302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.960425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.960464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.960587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.960616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.960707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.960741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.960865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.960891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.961005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.961031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.961139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.961165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 
00:26:30.730 [2024-11-15 12:48:10.961262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.961290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.961402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.961427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.961518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.961544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.961632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.961657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.961751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.961789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.961913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.961940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.962030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.962057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.962136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.962161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.962277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.962303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.962390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.962418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 
00:26:30.730 [2024-11-15 12:48:10.962533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.962560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.962639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-15 12:48:10.962665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-15 12:48:10.962760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.962787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.962871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.962899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.963043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.963069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.963210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.963235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.963351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.963379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.963524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.963550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.963671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.963697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.963845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.963872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 
00:26:30.731 [2024-11-15 12:48:10.964017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.964043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.964156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.964182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.964266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.964293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.964399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.964430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.964577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.964604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.964735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.964774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.964871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.964898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.965004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.965030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.965145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.965170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.965288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.965314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 
00:26:30.731 [2024-11-15 12:48:10.965404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.965431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.965530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.965570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.965727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.965755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.965868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.965894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.965983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.966008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.966101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.966127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.966209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.966235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.966352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.966377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-15 12:48:10.966464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-15 12:48:10.966490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.966575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.966600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 
00:26:30.732 [2024-11-15 12:48:10.966710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.966745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.966861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.966886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.966978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.967004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.967110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.967135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.967217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.967244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.967361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.967389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.967518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.967557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.967679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.967707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.967839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.967866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.967948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.967974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 
00:26:30.732 [2024-11-15 12:48:10.968069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.968095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.968211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.968240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.968357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.968382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.968527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.968552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.968690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.968716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.968838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.968886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.969038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.969085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.969225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.969272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.969365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.969391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.969482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.969509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 
00:26:30.732 [2024-11-15 12:48:10.969619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.969644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.969783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.969809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.969949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.969975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.970117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.970147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.732 [2024-11-15 12:48:10.970266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.732 [2024-11-15 12:48:10.970292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.732 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.970433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.970459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.970597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.970645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.970773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.970812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.970899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.970926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.971047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.971083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 
00:26:30.733 [2024-11-15 12:48:10.971293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.971328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.971464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.971520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.971690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.971731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.971860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.971885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.971968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.971993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.972156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.972190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.972359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.972401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.972600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.972642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.972795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.972822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.972936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.972979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 
00:26:30.733 [2024-11-15 12:48:10.973160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.973220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.973391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.973426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.973585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.973611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.973727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.973753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.973896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.973921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.974061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.974095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.974239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.974275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.974420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.974457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.974598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.974634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 00:26:30.733 [2024-11-15 12:48:10.974786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.733 [2024-11-15 12:48:10.974812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:30.733 qpair failed and we were unable to recover it. 
00:26:30.733 [2024-11-15 12:48:10.974929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.733 [2024-11-15 12:48:10.974954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420
00:26:30.733 qpair failed and we were unable to recover it.
00:26:30.735 [2024-11-15 12:48:10.979744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.735 [2024-11-15 12:48:10.979800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420
00:26:30.735 qpair failed and we were unable to recover it.
00:26:30.735 [2024-11-15 12:48:10.981579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.735 [2024-11-15 12:48:10.981637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:30.735 qpair failed and we were unable to recover it.
00:26:30.735 [2024-11-15 12:48:10.981755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.735 [2024-11-15 12:48:10.981795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420
00:26:30.735 qpair failed and we were unable to recover it.
00:26:31.025 [2024-11-15 12:48:11.006135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.025 [2024-11-15 12:48:11.006161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420
00:26:31.025 qpair failed and we were unable to recover it.
00:26:31.025 [2024-11-15 12:48:11.006257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.006282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.006395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.006421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.006510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.006538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.006651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.006677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.006821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.006848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.006963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.006990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.007081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.007108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.007196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.007222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.007342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.007370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.007505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.007531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 
00:26:31.025 [2024-11-15 12:48:11.007612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.007637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.007727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.007753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.007866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.007892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.007972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.007996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.008110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.008135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.008270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.008295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.008403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.008428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.008517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.008545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.008661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.008688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.008805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.008836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 
00:26:31.025 [2024-11-15 12:48:11.008922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.008949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.009089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.009116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.009227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.009254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.009372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.009399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.009482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.009508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.009599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.009624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.009764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.009790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.009868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.009894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.025 qpair failed and we were unable to recover it. 00:26:31.025 [2024-11-15 12:48:11.010010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.025 [2024-11-15 12:48:11.010036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.010146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.010172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 
00:26:31.026 [2024-11-15 12:48:11.010281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.010306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.010444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.010469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.010547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.010572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.010692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.010727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.010845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.010870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.011022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.011048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.011154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.011179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.011296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.011321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.011429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.011455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.011564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.011589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 
00:26:31.026 [2024-11-15 12:48:11.011732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.011758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.011877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.011905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.012020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.012047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.012162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.012190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.012294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.012320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.012430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.012457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.012544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.012578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.012658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.012685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.012781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.012808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.012955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.012982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 
00:26:31.026 [2024-11-15 12:48:11.013095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.013121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.013232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.013258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.013339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.013365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.013479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.013506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.013625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.013664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.013790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.013819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.013910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.013936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.014050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.014077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.014188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.014215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.014328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.014355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 
00:26:31.026 [2024-11-15 12:48:11.014473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.014500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.026 [2024-11-15 12:48:11.014581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.026 [2024-11-15 12:48:11.014607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.026 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.014687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.014714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.014834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.014860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.014949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.014975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.015116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.015144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.015228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.015254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.015348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.015375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.015484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.015511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.015665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.015704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 
00:26:31.027 [2024-11-15 12:48:11.015864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.015892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.016030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.016055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.016137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.016163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.016285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.016311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.016421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.016450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.016569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.016596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.016712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.016743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.016884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.016910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.017034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.017060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.017198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.017225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 
00:26:31.027 [2024-11-15 12:48:11.017362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.017388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.017504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.017529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.017642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.017669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.017752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.017778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.017858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.017884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.018021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.018048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.018160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.018191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.018315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.018343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.018482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.018508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.018624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.018653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 
00:26:31.027 [2024-11-15 12:48:11.018744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.018771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.018889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.018914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.019027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.019053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.019169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.019196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.019306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.019333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.019448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.019477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.019592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.019620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.019704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.019738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.027 [2024-11-15 12:48:11.019880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.027 [2024-11-15 12:48:11.019907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.027 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.020014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.020040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 
00:26:31.028 [2024-11-15 12:48:11.020135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.020162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.020247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.020275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.020355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.020383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.020496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.020522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.020661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.020687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.020814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.020842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.020954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.020982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.021067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.021092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.021205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.021233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.021348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.021374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 
00:26:31.028 [2024-11-15 12:48:11.021487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.021512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.021620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.021645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.021739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.021766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.021895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.021929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.022067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.022093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.022212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.022238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.022320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.022348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.022441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.022468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.022612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.022638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.022759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.022786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 
00:26:31.028 [2024-11-15 12:48:11.022899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.022925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.023017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.023042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.023114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.023141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.023249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.023274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.023350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.023375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.023493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.023518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.023635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.023663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.023785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.023812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.023954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.023981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 00:26:31.028 [2024-11-15 12:48:11.024072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.028 [2024-11-15 12:48:11.024099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.028 qpair failed and we were unable to recover it. 
00:26:31.028 [2024-11-15 12:48:11.024185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.028 [2024-11-15 12:48:11.024212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:31.028 qpair failed and we were unable to recover it.
00:26:31.028 [2024-11-15 12:48:11.024298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.028 [2024-11-15 12:48:11.024326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420
00:26:31.028 qpair failed and we were unable to recover it.
00:26:31.028 [2024-11-15 12:48:11.024436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.028 [2024-11-15 12:48:11.024462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420
00:26:31.028 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats continuously from 12:48:11.024571 through 12:48:11.052979 (console timestamps 00:26:31.028 through 00:26:31.034), cycling over tqpair values 0x1bdefa0, 0x7fea04000b90, and 0x7fea00000b90; every attempt logs connect() failed, errno = 111 (ECONNREFUSED) against addr=10.0.0.2, port=4420, and every qpair fails without recovery ...]
00:26:31.034 [2024-11-15 12:48:11.053099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.053129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 00:26:31.034 [2024-11-15 12:48:11.053218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.053244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 00:26:31.034 [2024-11-15 12:48:11.053337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.053362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 00:26:31.034 [2024-11-15 12:48:11.053443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.053469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 00:26:31.034 [2024-11-15 12:48:11.053612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.053637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 00:26:31.034 [2024-11-15 12:48:11.053753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.053780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 00:26:31.034 [2024-11-15 12:48:11.053864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.053891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 00:26:31.034 [2024-11-15 12:48:11.054036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.054062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 00:26:31.034 [2024-11-15 12:48:11.054173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.054199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 00:26:31.034 [2024-11-15 12:48:11.054287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.054314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 
00:26:31.034 [2024-11-15 12:48:11.054401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.054429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 00:26:31.034 [2024-11-15 12:48:11.054515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.054542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 00:26:31.034 [2024-11-15 12:48:11.054652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.054678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 00:26:31.034 [2024-11-15 12:48:11.054803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.054830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 00:26:31.034 [2024-11-15 12:48:11.054969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-11-15 12:48:11.055009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.034 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.055128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.055155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.055270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.055297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.055374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.055400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.055512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.055538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.055658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.055684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 
00:26:31.035 [2024-11-15 12:48:11.055778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.055804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.055919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.055945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.056025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.056050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.056164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.056190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.056302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.056327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.056456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.056494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.056641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.056670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.056764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.056795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.056879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.056904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.056984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.057009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 
00:26:31.035 [2024-11-15 12:48:11.057148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.057173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.057281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.057306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.057382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.057407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.057541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.057566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.057676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.057701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.057802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.057829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.057915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.057941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.058127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.058160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.058347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.058381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.058522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.058554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 
00:26:31.035 [2024-11-15 12:48:11.058715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.058758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.058882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.058919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.059039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.059065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.059205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.059229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.059363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.059388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.059473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.059497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.059583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.059607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.035 [2024-11-15 12:48:11.059726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.035 [2024-11-15 12:48:11.059752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.035 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.059834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.059861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.059974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.060000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 
00:26:31.036 [2024-11-15 12:48:11.060107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.060133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.060245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.060270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.060388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.060413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.060502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.060527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.060602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.060632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.060763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.060803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.060934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.060963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.061106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.061135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.061250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.061277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.061363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.061391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 
00:26:31.036 [2024-11-15 12:48:11.061486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.061512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.061626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.061655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.061786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.061814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.061932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.061966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.062088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.062115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.062266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.062293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.062388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.062416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.062559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.062586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.062680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.062707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.062803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.062828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 
00:26:31.036 [2024-11-15 12:48:11.062938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.062964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.063051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.063076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.063154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.063180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.063292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.063322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.063403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.063430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.063544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.063574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.063704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.063744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.063860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.063888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.063988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.064016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.064123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.064149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 
00:26:31.036 [2024-11-15 12:48:11.064243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.064273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.064385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.064412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.064527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.064558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.064652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.064681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.064809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.064837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.036 qpair failed and we were unable to recover it. 00:26:31.036 [2024-11-15 12:48:11.064952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.036 [2024-11-15 12:48:11.064983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.065093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.065120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.065241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.065270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.065394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.065422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.065513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.065539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 
00:26:31.037 [2024-11-15 12:48:11.065627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.065657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.065745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.065778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.065862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.065889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.065978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.066010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.066099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.066140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.066230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.066256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.066369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.066403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.066502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.066541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.066629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.066656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.066767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.066794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 
00:26:31.037 [2024-11-15 12:48:11.066879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.066905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.067048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.067081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.067252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.067299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.067384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.067410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.067525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.067551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.067692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.067723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.067807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.067833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.067943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.067969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.068059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.068085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.068230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.068278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 
00:26:31.037 [2024-11-15 12:48:11.068375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.068408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.068545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.068578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.068745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.068771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.068880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.068905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.068993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.069019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.069096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.069121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.069262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.069287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.069402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.069427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.069573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.069602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.069714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.069759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 
00:26:31.037 [2024-11-15 12:48:11.069848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.069878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.069971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.070009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.070096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.070124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.037 qpair failed and we were unable to recover it. 00:26:31.037 [2024-11-15 12:48:11.070208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.037 [2024-11-15 12:48:11.070236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.070351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.070378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.070489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.070516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.070607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.070633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.070712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.070745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.070857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.070882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.070968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.070995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 
00:26:31.038 [2024-11-15 12:48:11.071105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.071138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.071320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.071352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.071457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.071490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.071598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.071624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.071705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.071736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.071828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.071853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.071964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.071989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.072062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.072087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.072169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.072194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.072368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.072400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 
00:26:31.038 [2024-11-15 12:48:11.072541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.072574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.072702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.072736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.072854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.072880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.072955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.072980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.073094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.073119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.073232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.073257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.073395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.073420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.073533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.073558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.073674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.073708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.073840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.073867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 
00:26:31.038 [2024-11-15 12:48:11.074004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.074033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.074172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.074199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.074314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.074344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.074450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.074476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.074615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.074643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.074757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.074785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.074935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.074963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.075080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.075106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.075205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.075232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.038 [2024-11-15 12:48:11.075349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.075377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 
00:26:31.038 [2024-11-15 12:48:11.075472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.038 [2024-11-15 12:48:11.075499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.038 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.075582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.075613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.075770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.075798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.075889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.075927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.076026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.076052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.076158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.076184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.076300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.076325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.076459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.076485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.076595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.076621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.076702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.076738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 
00:26:31.039 [2024-11-15 12:48:11.076894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.076924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.077024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.077051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.077131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.077157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.077238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.077266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.077370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.077397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.077482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.077512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.077639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.077668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.077780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.077808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.077933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.077967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.078064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.078090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 
00:26:31.039 [2024-11-15 12:48:11.078179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.078207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.078322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.078350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.078465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.078491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.078607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.078633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.078747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.078773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.078887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.078912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.079050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.079076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.079186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.079211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.079324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.079350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.079440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.079466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 
00:26:31.039 [2024-11-15 12:48:11.079575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.079601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.079694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.079725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.079809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.079835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.079923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.079949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.080116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.080149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.080280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.080313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.039 [2024-11-15 12:48:11.080433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.039 [2024-11-15 12:48:11.080491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.039 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.080604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.080631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.080728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.080756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.080868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.080896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 
00:26:31.040 [2024-11-15 12:48:11.080978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.081004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.081147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.081175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.081271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.081298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.081438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.081464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.081550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.081576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.081653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.081678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.081777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.081803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.081948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.081974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.082076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.082109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.082256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.082289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 
00:26:31.040 [2024-11-15 12:48:11.082457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.082490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.082617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.082643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.082763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.082789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.082931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.082956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.083071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.083120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.083262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.083295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.083456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.083489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.083628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.083654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.083739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.083765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.083856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.083881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 
00:26:31.040 [2024-11-15 12:48:11.083961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.083987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.084109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.084149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.084265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.084298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.084451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.084498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.084639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.084671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.084847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.084873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.084981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.085006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.085146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.085171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.085263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.085288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.040 [2024-11-15 12:48:11.085373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.085408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 
00:26:31.040 [2024-11-15 12:48:11.085505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.040 [2024-11-15 12:48:11.085540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.040 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.085678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.085706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.085841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.085867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.085975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.086000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.086136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.086161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.086256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.086317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.086476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.086508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.086621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.086666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.086809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.086835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.086951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.086976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 
00:26:31.041 [2024-11-15 12:48:11.087098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.087123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.087277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.087309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.087451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.087484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.087629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.087655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.087777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.087817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.087956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.087984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.088125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.088173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.088261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.088289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.088413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.088460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.088568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.088601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 
00:26:31.041 [2024-11-15 12:48:11.088743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.088769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.088912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.088938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.089033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.089059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.089228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.089261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.089451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.089484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.089647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.089710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.089848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.089877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.089954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.089979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.090072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.090097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.090296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.090329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 
00:26:31.041 [2024-11-15 12:48:11.090476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.090509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.090646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.090671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.090813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.090840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.090929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.090974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.091115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.091147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.091317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.091350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.091461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.091494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.091652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.091677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.041 qpair failed and we were unable to recover it. 00:26:31.041 [2024-11-15 12:48:11.091802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.041 [2024-11-15 12:48:11.091832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.091979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.092007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 
00:26:31.042 [2024-11-15 12:48:11.092116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.092164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.092282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.092330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.092482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.092530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.092623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.092650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.092742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.092769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.092905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.092931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.093041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.093066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.093214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.093254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.093397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.093440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.093602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.093627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 
00:26:31.042 [2024-11-15 12:48:11.093739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.093765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.093846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.093871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.093963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.093988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.094125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.094164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.094270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.094302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.094457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.094490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.094630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.094655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.094762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.094788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.094868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.094895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.095035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.095060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 
00:26:31.042 [2024-11-15 12:48:11.095143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.095168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.095251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.095277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.095440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.095466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.095613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.095646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.095795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.095821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.095935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.095960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.096038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.096065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.096181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.096207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.096300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.096330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 00:26:31.042 [2024-11-15 12:48:11.096467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.042 [2024-11-15 12:48:11.096519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.042 qpair failed and we were unable to recover it. 
00:26:31.042 [2024-11-15 12:48:11.096660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:31.042 [2024-11-15 12:48:11.096687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 
00:26:31.042 qpair failed and we were unable to recover it. 
00:26:31.042 [2024-11-15 12:48:11.097199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:31.042 [2024-11-15 12:48:11.097225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 
00:26:31.042 qpair failed and we were unable to recover it. 
00:26:31.047 [2024-11-15 12:48:11.123806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:31.047 [2024-11-15 12:48:11.123852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 
00:26:31.047 qpair failed and we were unable to recover it. 
00:26:31.048 [duplicate log entries condensed: the same connect() failed (errno = 111) and qpair-recovery errors repeat continuously from 12:48:11.096 through 12:48:11.128 for tqpair=0x7fea04000b90, 0x1bdefa0 and 0x7fea00000b90, all targeting addr=10.0.0.2, port=4420] 
00:26:31.048 [2024-11-15 12:48:11.128648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.128673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.128771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.128797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.128880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.128910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.128993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.129018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.129145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.129177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.129307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.129340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.129449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.129484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.129634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.129660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.129747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.129774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.129887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.129913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 
00:26:31.048 [2024-11-15 12:48:11.130037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.130069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.130225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.130259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.130395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.130429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.130529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.130562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.130703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.130743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.130872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.130898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.130980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.131006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.048 [2024-11-15 12:48:11.131112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.048 [2024-11-15 12:48:11.131137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.048 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.131248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.131293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.131411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.131436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 
00:26:31.049 [2024-11-15 12:48:11.131589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.131634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.131772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.131798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.131877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.131902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.132010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.132036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.132177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.132202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.132313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.132338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.132422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.132447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.132527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.132552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.132663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.132706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.132823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.132852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 
00:26:31.049 [2024-11-15 12:48:11.132940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.132965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.133078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.133103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.133189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.133214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.133326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.133352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.133470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.133495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.133581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.133606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.133745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.133771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.133886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.133912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.134074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.134108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.134207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.134240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 
00:26:31.049 [2024-11-15 12:48:11.134342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.134374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.134516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.134549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.134687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.134735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.134836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.134885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.135036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.135071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.135230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.135259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.135454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.135489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.135591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.135623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.135759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.135784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 00:26:31.049 [2024-11-15 12:48:11.135895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.049 [2024-11-15 12:48:11.135921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.049 qpair failed and we were unable to recover it. 
00:26:31.049 [2024-11-15 12:48:11.136044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.136089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.136208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.136241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.136381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.136413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.136525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.136568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.136684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.136734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.136842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.136867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.136966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.136999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.137146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.137179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.137326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.137368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.137501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.137534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 
00:26:31.050 [2024-11-15 12:48:11.137684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.137709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.137829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.137855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.137967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.138016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.138111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.138144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.138275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.138308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.138442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.138475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.138575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.138608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.138713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.138773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.138889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.138917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.139037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.139069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 
00:26:31.050 [2024-11-15 12:48:11.139188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.139245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.139434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.139459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.139590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.139615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.139729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.139755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.139867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.139893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.139969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.139993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.140065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.140091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.140267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.140301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.140438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.140471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.140633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.140662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 
00:26:31.050 [2024-11-15 12:48:11.140803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.140833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.140942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.140978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.141147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.141181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.141323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.141368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.141517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.141561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.141703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.141735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.141825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.141851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.141944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.050 [2024-11-15 12:48:11.141970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.050 qpair failed and we were unable to recover it. 00:26:31.050 [2024-11-15 12:48:11.142120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.142146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.142269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.142303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 
00:26:31.051 [2024-11-15 12:48:11.142454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.142479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.142587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.142612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.142698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.142739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.142833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.142858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.142968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.142994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.143101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.143135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.143270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.143304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.143420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.143474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.143589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.143619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.143707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.143750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 
00:26:31.051 [2024-11-15 12:48:11.143844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.143875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.143955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.143981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.144117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.144143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.144304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.144339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.144510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.144543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.144654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.144678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.144769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.144795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.144882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.144908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.145038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.145071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.145196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.145229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 
00:26:31.051 [2024-11-15 12:48:11.145330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.145375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.145494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.145527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.145665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.145698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.145833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.145859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.145974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.145999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.146153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.146200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.146316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.146362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.146463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.146495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.146643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.146669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.146768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.146794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 
00:26:31.051 [2024-11-15 12:48:11.146909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.146934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.147092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.147125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.147298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.147331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.147470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.147503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.147640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.147683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.051 [2024-11-15 12:48:11.147787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.051 [2024-11-15 12:48:11.147816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.051 qpair failed and we were unable to recover it. 00:26:31.052 [2024-11-15 12:48:11.147934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.052 [2024-11-15 12:48:11.147969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.052 qpair failed and we were unable to recover it. 00:26:31.052 [2024-11-15 12:48:11.148147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.052 [2024-11-15 12:48:11.148192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.052 qpair failed and we were unable to recover it. 00:26:31.052 [2024-11-15 12:48:11.148334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.052 [2024-11-15 12:48:11.148373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.052 qpair failed and we were unable to recover it. 00:26:31.052 [2024-11-15 12:48:11.148517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.052 [2024-11-15 12:48:11.148551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.052 qpair failed and we were unable to recover it. 
00:26:31.052 [2024-11-15 12:48:11.148668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.052 [2024-11-15 12:48:11.148712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:31.052 qpair failed and we were unable to recover it.
00:26:31.057 [... the same connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it" sequence repeats from 2024-11-15 12:48:11.148 through 12:48:11.185 for tqpair handles 0x7fea04000b90, 0x7fea0c000b90, and 0x1bdefa0, all targeting addr=10.0.0.2, port=4420 ...]
00:26:31.057 [2024-11-15 12:48:11.186042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.057 [2024-11-15 12:48:11.186079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.057 qpair failed and we were unable to recover it. 00:26:31.057 [2024-11-15 12:48:11.186258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.057 [2024-11-15 12:48:11.186293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.057 qpair failed and we were unable to recover it. 00:26:31.057 [2024-11-15 12:48:11.186491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.057 [2024-11-15 12:48:11.186528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.057 qpair failed and we were unable to recover it. 00:26:31.057 [2024-11-15 12:48:11.186647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.057 [2024-11-15 12:48:11.186684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.057 qpair failed and we were unable to recover it. 00:26:31.057 [2024-11-15 12:48:11.186855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.057 [2024-11-15 12:48:11.186883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.057 qpair failed and we were unable to recover it. 00:26:31.057 [2024-11-15 12:48:11.186976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.057 [2024-11-15 12:48:11.187003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.057 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.187119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.187173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.187351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.187389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.187510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.187569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.187693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.187730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 
00:26:31.058 [2024-11-15 12:48:11.187832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.187858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.187987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.188025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.188221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.188269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.188455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.188496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.188651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.188682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.188855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.188883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.189005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.189057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.189187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.189223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.189372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.189429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.189557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.189610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 
00:26:31.058 [2024-11-15 12:48:11.189790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.189817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.189931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.189978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.190125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.190162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.190307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.190345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.190492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.190529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.190677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.190750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.190915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.190953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.191099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.191127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.191263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.191298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.191445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.191480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 
00:26:31.058 [2024-11-15 12:48:11.191589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.191614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.191752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.191778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.191915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.191940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.192050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.192076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.192238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.192273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.192424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.192467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.192644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.192670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.192780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.192806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.192892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.192917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.193096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.193142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 
00:26:31.058 [2024-11-15 12:48:11.193319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.193366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.193539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.193594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.193765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.193792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.193888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.193913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.194041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.194067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.194222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.058 [2024-11-15 12:48:11.194256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.058 qpair failed and we were unable to recover it. 00:26:31.058 [2024-11-15 12:48:11.194399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.194433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.194594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.194619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.194734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.194779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.194886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.194911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 
00:26:31.059 [2024-11-15 12:48:11.195047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.195081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.195237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.195272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.195406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.195431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.195619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.195654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.195802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.195828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.195945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.195971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.196053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.196078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.196243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.196278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.196420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.196455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.196577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.196603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 
00:26:31.059 [2024-11-15 12:48:11.196723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.196749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.196885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.196910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.197018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.197052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.197201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.197236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.197408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.197443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.197624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.197670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.197772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.197808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.197979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.198030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.198222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.198261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.198376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.198434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 
00:26:31.059 [2024-11-15 12:48:11.198576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.198612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.198750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.198788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.198910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.198936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.199056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.199081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.199169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.199212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.199356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.199392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.199508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.199544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.199704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.199736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.199815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.199840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.199968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.200002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 
00:26:31.059 [2024-11-15 12:48:11.200185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.200221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.200362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.200403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.200548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.200582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.200694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.200737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.200843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.200868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.201032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.201066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.201206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.201254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.059 qpair failed and we were unable to recover it. 00:26:31.059 [2024-11-15 12:48:11.201396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.059 [2024-11-15 12:48:11.201430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.201601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.201635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.201736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.201762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 
00:26:31.060 [2024-11-15 12:48:11.201839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.201863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.201973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.201999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.202094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.202128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.202316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.202362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.202518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.202557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.202746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.202791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.202889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.202917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.203008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.203035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.203121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.203152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.203304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.203340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 
00:26:31.060 [2024-11-15 12:48:11.203518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.203569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.203686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.203711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.203798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.203824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.203945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.203980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.204086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.204121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.204300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.204351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.204557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.204608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.204775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.204800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.204896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.204921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.205035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.205082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 
00:26:31.060 [2024-11-15 12:48:11.205255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.205289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.205396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.205431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.205537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.205582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.205691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.205730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.205840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.205865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.205975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.206001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.206112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.206162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.206358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.206392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.206498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.206533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.206650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.206675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 
00:26:31.060 [2024-11-15 12:48:11.206828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.206858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.207043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.207080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.207194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.207230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.207383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.207429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.207594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.207632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.207825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.207853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.207964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.207990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.208157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.208194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.060 qpair failed and we were unable to recover it. 00:26:31.060 [2024-11-15 12:48:11.208372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.060 [2024-11-15 12:48:11.208408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.208523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.208567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 
00:26:31.061 [2024-11-15 12:48:11.208689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.208714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.208860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.208885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.209001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.209026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.209105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.209130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.209289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.209323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.209424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.209459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.209608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.209634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.209747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.209773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.209879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.209927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.210071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.210105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 
00:26:31.061 [2024-11-15 12:48:11.210258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.210283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.210465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.210499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.210638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.210672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.210824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.210877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.211030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.211084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.211257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.211294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.211425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.211459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.211617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.211646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.211739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.211775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.211916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.211953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 
00:26:31.061 [2024-11-15 12:48:11.212085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.212121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.212289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.212327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.212473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.212510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.212683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.212709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.212827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.212852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.213004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.213039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.213184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.213219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.213404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.213429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.213543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.213577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.213750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.213792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 
00:26:31.061 [2024-11-15 12:48:11.213927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.213962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.214110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.214145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.214249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.214297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.214476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.214510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.214643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.214669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.214782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.214808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.214921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.214946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.215049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.215082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.215221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.061 [2024-11-15 12:48:11.215255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.061 qpair failed and we were unable to recover it. 00:26:31.061 [2024-11-15 12:48:11.215359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.215407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 
00:26:31.062 [2024-11-15 12:48:11.215570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.215602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.215711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.215754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.215876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.215902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.216022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.216047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.216230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.216263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.216430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.216479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.216606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.216639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.216809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.216835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.217016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.217049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.217182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.217225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 
00:26:31.062 [2024-11-15 12:48:11.217367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.217401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.217514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.217553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.217699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.217740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.217861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.217888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.217969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.218002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.218174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.218212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.218364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.218398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.218514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.218548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.218737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.218782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.218914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.218941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 
00:26:31.062 [2024-11-15 12:48:11.219056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.219082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.219267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.219305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.219455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.219491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.219649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.219677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.219805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.219856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.220006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.220063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.220240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.220276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.220455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.220492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.220629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.220665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.220850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.220889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 
00:26:31.062 [2024-11-15 12:48:11.221006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.221042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.221252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.221288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.221390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.221425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.221553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.221593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.221737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.062 [2024-11-15 12:48:11.221764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.062 qpair failed and we were unable to recover it. 00:26:31.062 [2024-11-15 12:48:11.221878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.221904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.222075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.222124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.222250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.222285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.222396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.222444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.222615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.222648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 
00:26:31.063 [2024-11-15 12:48:11.222801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.222827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.222972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.223005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.223140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.223173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.223323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.223356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.223492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.223541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.223634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.223664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.223813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.223839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.223952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.223977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.224098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.224131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.224263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.224296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 
00:26:31.063 [2024-11-15 12:48:11.224467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.224500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.224631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.224664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.224850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.224876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.224987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.225031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.225175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.225219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.225325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.225357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.225503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.225548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.225660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.225686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.225839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.225865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.225977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.226021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 
00:26:31.063 [2024-11-15 12:48:11.226162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.226208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.226387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.226423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.226582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.226609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.226688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.226714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.226833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.226859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.226976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.227018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.227122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.227155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.227292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.227339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.227470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.227503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.227639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.227672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 
00:26:31.063 [2024-11-15 12:48:11.227824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.227849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.227956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.227998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.228160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.228204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.228348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.228384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.228610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.228636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.228778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.228805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.063 qpair failed and we were unable to recover it. 00:26:31.063 [2024-11-15 12:48:11.228980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.063 [2024-11-15 12:48:11.229013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.229176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.229209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.229342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.229375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.229477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.229510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 
00:26:31.064 [2024-11-15 12:48:11.229631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.229657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.229861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.229887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.229969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.230015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.230172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.230206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.230373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.230406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.230578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.230611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.230752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.230795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.230921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.230954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.231094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.231127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.231285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.231322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 
00:26:31.064 [2024-11-15 12:48:11.231538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.231596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.231733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.231778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.231890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.231916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.232054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.232090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.232202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.232238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.232348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.232387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.232554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.232582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.232731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.232759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.232854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.232881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.232974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.233001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 
00:26:31.064 [2024-11-15 12:48:11.233163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.233205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.233366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.233400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.233526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.233560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.233737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.233765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.233853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.233879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.233995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.234046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.234183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.234216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.234398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.234446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.234603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.234639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.234773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.234798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 
00:26:31.064 [2024-11-15 12:48:11.234910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.234935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.235056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.235103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.235247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.235280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.235417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.235463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.235637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.235663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.235752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.235778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.235890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.235915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.064 qpair failed and we were unable to recover it. 00:26:31.064 [2024-11-15 12:48:11.236028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.064 [2024-11-15 12:48:11.236076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.065 qpair failed and we were unable to recover it. 00:26:31.065 [2024-11-15 12:48:11.236244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.065 [2024-11-15 12:48:11.236277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.065 qpair failed and we were unable to recover it. 00:26:31.065 [2024-11-15 12:48:11.236392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.065 [2024-11-15 12:48:11.236426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.065 qpair failed and we were unable to recover it. 
00:26:31.065 [2024-11-15 12:48:11.236563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.065 [2024-11-15 12:48:11.236596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.065 qpair failed and we were unable to recover it. 00:26:31.065 [2024-11-15 12:48:11.236799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.065 [2024-11-15 12:48:11.236825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.065 qpair failed and we were unable to recover it. 00:26:31.065 [2024-11-15 12:48:11.236934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.065 [2024-11-15 12:48:11.236959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.065 qpair failed and we were unable to recover it. 00:26:31.065 [2024-11-15 12:48:11.237089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.065 [2024-11-15 12:48:11.237126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.065 qpair failed and we were unable to recover it. 00:26:31.065 [2024-11-15 12:48:11.237279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.065 [2024-11-15 12:48:11.237314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.065 qpair failed and we were unable to recover it. 00:26:31.065 [2024-11-15 12:48:11.237453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.065 [2024-11-15 12:48:11.237489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.065 qpair failed and we were unable to recover it. 00:26:31.065 [2024-11-15 12:48:11.237605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.065 [2024-11-15 12:48:11.237642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.065 qpair failed and we were unable to recover it. 00:26:31.065 [2024-11-15 12:48:11.237823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.065 [2024-11-15 12:48:11.237849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.065 qpair failed and we were unable to recover it. 00:26:31.065 [2024-11-15 12:48:11.237957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.065 [2024-11-15 12:48:11.237982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.065 qpair failed and we were unable to recover it. 00:26:31.065 [2024-11-15 12:48:11.238109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.065 [2024-11-15 12:48:11.238145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.065 qpair failed and we were unable to recover it. 
00:26:31.065 [2024-11-15 12:48:11.238290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.065 [2024-11-15 12:48:11.238325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420
00:26:31.065 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111, ECONNREFUSED) followed by nvme_tcp_qpair_connect_sock sock connection error and "qpair failed and we were unable to recover it." repeats continuously for tqpair=0x1bdefa0, 0x7fea00000b90, 0x7fea04000b90 and 0x7fea0c000b90, all against addr=10.0.0.2, port=4420, with timestamps from 12:48:11.238 through 12:48:11.274 ...]
00:26:31.071 [2024-11-15 12:48:11.274725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.071 [2024-11-15 12:48:11.274759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420
00:26:31.071 qpair failed and we were unable to recover it.
00:26:31.071 [2024-11-15 12:48:11.274907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.274936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 00:26:31.071 [2024-11-15 12:48:11.275030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.275056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 00:26:31.071 [2024-11-15 12:48:11.275176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.275202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 00:26:31.071 [2024-11-15 12:48:11.275318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.275345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 00:26:31.071 [2024-11-15 12:48:11.275425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.275452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 00:26:31.071 [2024-11-15 12:48:11.275566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.275591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 00:26:31.071 [2024-11-15 12:48:11.275699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.275732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 00:26:31.071 [2024-11-15 12:48:11.275849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.275875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 00:26:31.071 [2024-11-15 12:48:11.275959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.275985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 00:26:31.071 [2024-11-15 12:48:11.276074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.276101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 
00:26:31.071 [2024-11-15 12:48:11.276187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.276225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 00:26:31.071 [2024-11-15 12:48:11.276344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.276373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 00:26:31.071 [2024-11-15 12:48:11.276484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.276511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 00:26:31.071 [2024-11-15 12:48:11.276621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.276647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 00:26:31.071 [2024-11-15 12:48:11.276757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.071 [2024-11-15 12:48:11.276784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.071 qpair failed and we were unable to recover it. 00:26:31.071 [2024-11-15 12:48:11.276902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.276934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.277013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.277039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.277149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.277175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.277264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.277289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.277411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.277438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 
00:26:31.072 [2024-11-15 12:48:11.277555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.277580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.277669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.277697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.277817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.277844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.277960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.277987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.278071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.278097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.278202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.278228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.278313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.278339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.278433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.278472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.278586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.278613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.278736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.278763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 
00:26:31.072 [2024-11-15 12:48:11.278873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.278898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.278980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.279008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.279149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.279175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.279261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.279289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.279379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.279405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.279546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.279572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.279653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.279678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.279807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.279834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.279950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.279977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.280116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.280141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 
00:26:31.072 [2024-11-15 12:48:11.280227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.280251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.280365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.280391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.280518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.280562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.280707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.280746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.280834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.280859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.280966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.280992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.281106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.281133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.281243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.281269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.072 [2024-11-15 12:48:11.281383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.072 [2024-11-15 12:48:11.281409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.072 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.281525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.281555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 
00:26:31.073 [2024-11-15 12:48:11.281648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.281674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.281765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.281792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.281909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.281934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.282074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.282099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.282213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.282237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.282333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.282360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.282485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.282511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.282656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.282683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.282808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.282835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.282979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.283005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 
00:26:31.073 [2024-11-15 12:48:11.283088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.283114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.283230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.283256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.283375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.283401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.283543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.283571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.283687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.283713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.283791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.283817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.283957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.283983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.284092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.284117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.284229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.284255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.284371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.284398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 
00:26:31.073 [2024-11-15 12:48:11.284533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.284571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.284668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.284696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.284791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.284819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.284906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.284933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.285020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.285047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.285123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.285148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.285286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.285312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.285428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.285454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.285585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.285623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 00:26:31.073 [2024-11-15 12:48:11.285714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.285749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.073 qpair failed and we were unable to recover it. 
00:26:31.073 [2024-11-15 12:48:11.285836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.073 [2024-11-15 12:48:11.285862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.285968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.285994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.286104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.286134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.286241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.286267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.286384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.286410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.286519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.286544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.286657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.286683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.286800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.286827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.286933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.286959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.287075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.287102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 
00:26:31.074 [2024-11-15 12:48:11.287217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.287245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.287336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.287362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.287472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.287498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.287613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.287641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.287753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.287791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.287907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.287934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.288056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.288082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.288173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.288199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.288284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.288309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.288391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.288418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 
00:26:31.074 [2024-11-15 12:48:11.288561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.288587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.288699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.288732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.288848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.288875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.289018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.289047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.289183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.289209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.289299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.289325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.289412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.289437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.289562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.289587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.289699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.289734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.289878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.289909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 
00:26:31.074 [2024-11-15 12:48:11.290023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.290048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.290185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.290210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.074 [2024-11-15 12:48:11.290299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.074 [2024-11-15 12:48:11.290326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.074 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.290417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.290444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.290520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.290546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.290623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.290649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.290766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.290793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.290876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.290902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.291046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.291072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.291185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.291210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 
00:26:31.075 [2024-11-15 12:48:11.291290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.291315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.291430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.291455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.291526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.291550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.291697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.291744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.291871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.291898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.291976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.292003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.292139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.292165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.292249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.292277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.292388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.292415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 00:26:31.075 [2024-11-15 12:48:11.292555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.075 [2024-11-15 12:48:11.292580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.075 qpair failed and we were unable to recover it. 
00:26:31.075 [2024-11-15 12:48:11.292670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.075 [2024-11-15 12:48:11.292695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:31.075 qpair failed and we were unable to recover it.
00:26:31.076 [2024-11-15 12:48:11.294524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.076 [2024-11-15 12:48:11.294562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420
00:26:31.076 qpair failed and we were unable to recover it.
00:26:31.076 [2024-11-15 12:48:11.296742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.076 [2024-11-15 12:48:11.296781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420
00:26:31.076 qpair failed and we were unable to recover it.
[the same three-line failure pattern recurs continuously from 12:48:11.292670 through 12:48:11.326010, cycling through tqpair=0x7fea04000b90, 0x7fea00000b90, and 0x1bdefa0; every connect() attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and each qpair is reported as failed and unrecoverable]
00:26:31.081 [2024-11-15 12:48:11.326150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-15 12:48:11.326186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-15 12:48:11.326335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-15 12:48:11.326371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-15 12:48:11.326518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-15 12:48:11.326555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-15 12:48:11.326709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-15 12:48:11.326784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-15 12:48:11.326912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-15 12:48:11.326948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-15 12:48:11.327124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-15 12:48:11.327158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-15 12:48:11.327299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-15 12:48:11.327332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-15 12:48:11.327474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-15 12:48:11.327508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-15 12:48:11.327645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-15 12:48:11.327679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.327795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.327830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 
00:26:31.082 [2024-11-15 12:48:11.327974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.328007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.328100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.328139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.328271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.328305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.328441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.328473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.328637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.328670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.328813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.328847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.329030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.329068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.329224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.329262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.329422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.329460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.329671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.329727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 
00:26:31.082 [2024-11-15 12:48:11.329904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.329941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.330120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.330168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.330341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.330379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.330541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.330578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.330765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.330824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.331015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.331057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.331249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.331289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.331447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.331486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.331675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.331714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.331859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.331899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 
00:26:31.082 [2024-11-15 12:48:11.332085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.332124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.332280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.332319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.332443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.332482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.332654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.332695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.332831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.332870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.332989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.333027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.333176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.333214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.333341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.333380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.333567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.333611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.333739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.333792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 
00:26:31.082 [2024-11-15 12:48:11.333991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.334030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.334169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.334211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.334402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.334441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.334569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.334610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.334767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.334807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.334985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.335020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.335155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-15 12:48:11.335190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-15 12:48:11.335334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.083 [2024-11-15 12:48:11.335368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.083 qpair failed and we were unable to recover it. 00:26:31.083 [2024-11-15 12:48:11.335511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.083 [2024-11-15 12:48:11.335546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.083 qpair failed and we were unable to recover it. 00:26:31.083 [2024-11-15 12:48:11.335685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.083 [2024-11-15 12:48:11.335731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.083 qpair failed and we were unable to recover it. 
00:26:31.083 [2024-11-15 12:48:11.335849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.083 [2024-11-15 12:48:11.335884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.083 qpair failed and we were unable to recover it. 00:26:31.083 [2024-11-15 12:48:11.336023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.083 [2024-11-15 12:48:11.336058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.083 qpair failed and we were unable to recover it. 00:26:31.083 [2024-11-15 12:48:11.336182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.083 [2024-11-15 12:48:11.336218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.083 qpair failed and we were unable to recover it. 00:26:31.083 [2024-11-15 12:48:11.336326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.083 [2024-11-15 12:48:11.336360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.083 qpair failed and we were unable to recover it. 00:26:31.083 [2024-11-15 12:48:11.336508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.083 [2024-11-15 12:48:11.336542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.083 qpair failed and we were unable to recover it. 00:26:31.083 [2024-11-15 12:48:11.336652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.083 [2024-11-15 12:48:11.336686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.083 qpair failed and we were unable to recover it. 00:26:31.083 [2024-11-15 12:48:11.336864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.083 [2024-11-15 12:48:11.336899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.083 qpair failed and we were unable to recover it. 00:26:31.083 [2024-11-15 12:48:11.337039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.083 [2024-11-15 12:48:11.337074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.083 qpair failed and we were unable to recover it. 00:26:31.083 [2024-11-15 12:48:11.337216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.083 [2024-11-15 12:48:11.337250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.083 qpair failed and we were unable to recover it. 00:26:31.083 [2024-11-15 12:48:11.337391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.083 [2024-11-15 12:48:11.337425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.083 qpair failed and we were unable to recover it. 
00:26:31.083 [2024-11-15 12:48:11.337533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.083 [2024-11-15 12:48:11.337568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.083 qpair failed and we were unable to recover it. 00:26:31.364 [2024-11-15 12:48:11.337675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.364 [2024-11-15 12:48:11.337709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.364 qpair failed and we were unable to recover it. 00:26:31.364 [2024-11-15 12:48:11.337883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.364 [2024-11-15 12:48:11.337918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.364 qpair failed and we were unable to recover it. 00:26:31.364 [2024-11-15 12:48:11.338020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.364 [2024-11-15 12:48:11.338056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.364 qpair failed and we were unable to recover it. 00:26:31.364 [2024-11-15 12:48:11.338173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.364 [2024-11-15 12:48:11.338206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.364 qpair failed and we were unable to recover it. 00:26:31.364 [2024-11-15 12:48:11.338336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.364 [2024-11-15 12:48:11.338370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.364 qpair failed and we were unable to recover it. 00:26:31.364 [2024-11-15 12:48:11.338502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.364 [2024-11-15 12:48:11.338537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.364 qpair failed and we were unable to recover it. 00:26:31.364 [2024-11-15 12:48:11.338653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.364 [2024-11-15 12:48:11.338686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.364 qpair failed and we were unable to recover it. 00:26:31.364 [2024-11-15 12:48:11.338839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.364 [2024-11-15 12:48:11.338874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.364 qpair failed and we were unable to recover it. 00:26:31.364 [2024-11-15 12:48:11.339013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.364 [2024-11-15 12:48:11.339048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.364 qpair failed and we were unable to recover it. 
00:26:31.364 [2024-11-15 12:48:11.339186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.364 [2024-11-15 12:48:11.339220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.364 qpair failed and we were unable to recover it. 00:26:31.364 [2024-11-15 12:48:11.339321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.364 [2024-11-15 12:48:11.339354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.364 qpair failed and we were unable to recover it. 00:26:31.364 [2024-11-15 12:48:11.339494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.364 [2024-11-15 12:48:11.339527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.364 qpair failed and we were unable to recover it. 00:26:31.364 [2024-11-15 12:48:11.339664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.364 [2024-11-15 12:48:11.339697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.364 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.339845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.339879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.340018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.340053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.340150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.340184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.340336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.340371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.340516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.340555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.340668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.340702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 
00:26:31.365 [2024-11-15 12:48:11.340841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.340876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.340982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.341015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.341129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.341163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.341305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.341339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.341476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.341509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.341625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.341659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.341805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.341841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.341955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.341989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.342134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.342168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.342306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.342339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 
00:26:31.365 [2024-11-15 12:48:11.342475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.342509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.342650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.342685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.342856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.342907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.343085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.343122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.343293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.343328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.343440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.343476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.343643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.343677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.343805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.343840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.343953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.343987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.344117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.344151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 
00:26:31.365 [2024-11-15 12:48:11.344291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.344325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.344495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.344528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.344648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.344682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.344815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.344850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.344993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.345026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.345205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.345240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.345382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.365 [2024-11-15 12:48:11.345418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.365 qpair failed and we were unable to recover it. 00:26:31.365 [2024-11-15 12:48:11.345549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.345584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.345703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.345748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.345878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.345912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 
00:26:31.366 [2024-11-15 12:48:11.346048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.346081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.346251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.346285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.346453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.346488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.346656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.346690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.346817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.346851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.346987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.347023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.347237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.347278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.347437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.347479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.347675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.347736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.347871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.347912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 
00:26:31.366 [2024-11-15 12:48:11.348079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.348129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.348236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.348270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.348440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.348474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.348611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.348646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.348786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.348822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.348979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.349030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.349177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.349213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.349381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.349415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.349586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.349620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.349756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.349791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 
00:26:31.366 [2024-11-15 12:48:11.349932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.349966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.350135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.350170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.350281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.350315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.350461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.350494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.350625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.350659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.350836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.350871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.351044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.351078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.366 [2024-11-15 12:48:11.351213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.366 [2024-11-15 12:48:11.351249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.366 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.351389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.351423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.351566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.351600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 
00:26:31.367 [2024-11-15 12:48:11.351710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.351754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.351902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.351937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.352078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.352112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.352214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.352249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.352384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.352419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.352522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.352556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.352689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.352731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.352870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.352904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.353035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.353068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.353209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.353244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 
00:26:31.367 [2024-11-15 12:48:11.353363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.353396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.353572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.353606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.353734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.353771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.353871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.353905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.354074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.354108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.354275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.354309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.354416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.354451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.354597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.354631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.354801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.354844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.354987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.355021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 
00:26:31.367 [2024-11-15 12:48:11.355190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.355224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.355394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.355429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.355565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.355598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.355736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.355770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.355910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.355945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.356114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.356147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.356312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.356345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.356487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.356521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.356663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.356697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.356877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.356911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 
00:26:31.367 [2024-11-15 12:48:11.357019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.357053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.367 [2024-11-15 12:48:11.357160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.367 [2024-11-15 12:48:11.357195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.367 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.357372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.357406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.357517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.357552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.357687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.357728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.357863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.357897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.358026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.358059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.358225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.358259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.358426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.358460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.358596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.358630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 
00:26:31.368 [2024-11-15 12:48:11.358731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.358766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.358906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.358940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.359069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.359103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.359232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.359265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.359407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.359443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.359588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.359625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.359798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.359834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.359945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.359980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.360149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.360182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.360324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.360358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 
00:26:31.368 [2024-11-15 12:48:11.360527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.360561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.360698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.360744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.360861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.360895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.361059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.361100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.361267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.361310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.361478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.361519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.361729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.361772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.361960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.361994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.362135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.362175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.362316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.362350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 
00:26:31.368 [2024-11-15 12:48:11.362489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.362523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.362653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.362687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.362880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.362914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.363056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.363090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.363284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.363325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.363487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.363528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.368 qpair failed and we were unable to recover it. 00:26:31.368 [2024-11-15 12:48:11.363689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.368 [2024-11-15 12:48:11.363744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.363914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.363957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.364068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.364108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.364248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.364290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 
00:26:31.369 [2024-11-15 12:48:11.364488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.364530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.364740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.364781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.364990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.365031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.365159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.365202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.365363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.365405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.365571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.365613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.365790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.365832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.366009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.366045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.366255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.366297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.366426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.366467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 
00:26:31.369 [2024-11-15 12:48:11.366624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.366665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.366857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.366900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.367024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.367066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.367269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.367311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.367511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.367552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.367679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.367730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.367935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.368000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.368179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.368244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.368463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.368530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.368716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.368769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 
00:26:31.369 [2024-11-15 12:48:11.368909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.368949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.369138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.369181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.369348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.369 [2024-11-15 12:48:11.369389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.369 qpair failed and we were unable to recover it. 00:26:31.369 [2024-11-15 12:48:11.369547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.369591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.369765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.369801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.369921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.369963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.370082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.370119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.370228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.370266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.370427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.370469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.370580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.370622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 
00:26:31.370 [2024-11-15 12:48:11.370811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.370848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.371023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.371059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.371206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.371241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.371361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.371424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.371602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.371637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.371764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.371799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.371923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.371959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.372067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.372101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.372248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.372289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.372413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.372447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 
00:26:31.370 [2024-11-15 12:48:11.372617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.372657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.372783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.372818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.372967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.373011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.373170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.373205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.373350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.373390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.373540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.373575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.373715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.373764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.373902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.373937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.374058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.374094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.374247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.374300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 
00:26:31.370 [2024-11-15 12:48:11.374415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.374454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.374632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.374684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.374850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.374893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.375011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.375045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.375180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.375216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.375354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.375409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.370 [2024-11-15 12:48:11.375588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.370 [2024-11-15 12:48:11.375631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:31.370 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.375839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.375888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.376028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.376070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.376263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.376304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 
00:26:31.371 [2024-11-15 12:48:11.376463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.376502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.376667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.376701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.376829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.376862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.377003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.377035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.377209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.377259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.377401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.377439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.377564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.377616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.377792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.377825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.377938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.377971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.378105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.378140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 
00:26:31.371 [2024-11-15 12:48:11.378249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.378283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.378421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.378455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.378590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.378624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.378786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.378819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.378932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.378965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.379087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.379120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.379274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.379306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.379497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.379548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.379683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.379725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.379886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.379919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 
00:26:31.371 [2024-11-15 12:48:11.380109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.380142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.380310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.380344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.380525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.380584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.380760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.380794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.380897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.380930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.381036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.381068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.381235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.381268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.382224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.382259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.382406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.382437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.382563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.382592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 
00:26:31.371 [2024-11-15 12:48:11.382726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.371 [2024-11-15 12:48:11.382756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.371 qpair failed and we were unable to recover it. 00:26:31.371 [2024-11-15 12:48:11.382851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.382881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.382988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.383017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.383140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.383170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.383297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.383327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.383484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.383513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.383643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.383673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.383772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.383802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.383902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.383932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.384090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.384119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 
00:26:31.372 [2024-11-15 12:48:11.384205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.384234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.384334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.384363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.384519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.384548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.384648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.384678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.384795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.384824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.384926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.384956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.385077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.385106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.385228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.385257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.385382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.385411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.385506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.385540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 
00:26:31.372 [2024-11-15 12:48:11.385657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.385686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.385830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.385861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.385983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.386013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.386113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.386142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.386269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.386298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.386430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.386459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.386562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.386591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.386686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.386715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.386822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.386851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.386968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.386998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 
00:26:31.372 [2024-11-15 12:48:11.387114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.387144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.387269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.387298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.387389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.387418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.387580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.387609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.372 [2024-11-15 12:48:11.387747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.372 [2024-11-15 12:48:11.387792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.372 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.387906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.387939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.388073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.388109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.388237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.388268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.388422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.388455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.388593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.388624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 
00:26:31.373 [2024-11-15 12:48:11.388765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.388807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.388976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.389010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.389188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.389223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.389369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.389405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.389523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.389558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.389769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.389802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.389905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.389944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.390077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.390129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.390355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.390406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.390520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.390575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 
00:26:31.373 [2024-11-15 12:48:11.390748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.390796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.390924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.390956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.391172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.391215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.391355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.391400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.391629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.391698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.391884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.391917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.392036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.392066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.392184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.392224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.392406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.392442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.392594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.392660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 
00:26:31.373 [2024-11-15 12:48:11.392840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.392873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.392998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.393035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.393212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.393246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.393382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.393429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.393616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.393670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.393811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.393843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.393978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.394035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.394177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.394207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.394328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.373 [2024-11-15 12:48:11.394360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.373 qpair failed and we were unable to recover it. 00:26:31.373 [2024-11-15 12:48:11.394505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.394576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 
00:26:31.374 [2024-11-15 12:48:11.394752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.394792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.394884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.394914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.395067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.395109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.395342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.395394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.395547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.395590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.395791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.395825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.395965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.395996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.396104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.396135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.396262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.396292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.396495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.396568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 
00:26:31.374 [2024-11-15 12:48:11.396768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.396802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.396914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.396946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.398147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.398181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.398319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.398351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.398478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.398507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.398623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.398653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.398760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.398794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.398894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.398929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.399079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.399121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.399250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.399279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 
00:26:31.374 [2024-11-15 12:48:11.399402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.399430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.399522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.399549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.399698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.399736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.399834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.399861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.399972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.400022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.400199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.400232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.400373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.400406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.400534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.400586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.400689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.400715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.374 qpair failed and we were unable to recover it. 00:26:31.374 [2024-11-15 12:48:11.400819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.374 [2024-11-15 12:48:11.400846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 
00:26:31.375 [2024-11-15 12:48:11.400961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.400993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.401164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.401204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.401319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.401369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.401538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.401565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.401693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.401729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.401832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.401861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.401965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.402014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.402224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.402257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.402396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.402429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.402615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.402652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 
00:26:31.375 [2024-11-15 12:48:11.402809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.402837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.402925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.402953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.403152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.403185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.403335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.403374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.403513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.403581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.403744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.403776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.403881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.403910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.404035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.404089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.404203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.404239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.404342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.404370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 
00:26:31.375 [2024-11-15 12:48:11.404464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.404494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.404650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.404678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.404781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.404808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.404930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.404956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.405043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.405070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.405171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.405218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.405429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.405468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.405596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.405622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.405726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.405754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.405874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.405900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 
00:26:31.375 [2024-11-15 12:48:11.406012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.406055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.375 qpair failed and we were unable to recover it. 00:26:31.375 [2024-11-15 12:48:11.406261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.375 [2024-11-15 12:48:11.406301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.406458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.406507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.406630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.406657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.406763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.406790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.406884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.406911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.407061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.407093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.407202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.407235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.407347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.407382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.407494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.407527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 
00:26:31.376 [2024-11-15 12:48:11.407652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.407698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.407827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.407853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.407955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.407982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.408098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.408126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.408264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.408316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.408431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.408479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.408658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.408692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.408858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.408900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.409091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.409144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.409259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.409294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 
00:26:31.376 [2024-11-15 12:48:11.409457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.409509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.409632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.409660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.409761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.409797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.409924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.409958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.410104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.410137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.410308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.410341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.410586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.410645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.410868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.410895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.410983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.411029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.411173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.411206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 
00:26:31.376 [2024-11-15 12:48:11.411413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.411488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.411699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.411781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.411898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.411947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.412066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.412099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.412255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.376 [2024-11-15 12:48:11.412294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.376 qpair failed and we were unable to recover it. 00:26:31.376 [2024-11-15 12:48:11.412511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.412544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.412677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.412710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.412885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.412916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.413009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.413048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.413133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.413160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 
00:26:31.377 [2024-11-15 12:48:11.413332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.413370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.413512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.413552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.413681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.413736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.413850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.413877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.413961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.413988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.414109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.414136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.414215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.414241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.414379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.414412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.414581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.414624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.414781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.414809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 
00:26:31.377 [2024-11-15 12:48:11.414932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.414959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.415096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.415142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.415264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.415306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.415454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.415504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.415689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.415731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.415854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.415880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.415974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.416002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.416105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.416132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.416244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.416272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.416439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.416480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 
00:26:31.377 [2024-11-15 12:48:11.416661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.416693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.416829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.416857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.416943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.416970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.417111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.417137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.417255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.417281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.417418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.417463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.417639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.417697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.417843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.417870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.417968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.417995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.418084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.418127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 
00:26:31.377 [2024-11-15 12:48:11.418257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.418305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.418480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.377 [2024-11-15 12:48:11.418536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.377 qpair failed and we were unable to recover it. 00:26:31.377 [2024-11-15 12:48:11.418669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.418707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.418862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.418889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.418999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.419052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.419205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.419244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.419381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.419420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.419615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.419648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.419790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.419818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.419910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.419937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 
00:26:31.378 [2024-11-15 12:48:11.420139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.420173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.420321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.420353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.420546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.420586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.420752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.420781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.420893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.420920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.421046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.421078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.421259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.421311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.421505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.421545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.421765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.421793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.421877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.421904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 
00:26:31.378 [2024-11-15 12:48:11.422024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.422051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.422158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.422185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.422318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.422351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.422522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.422566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.422706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.422769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.422934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.422960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.423096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.423124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.423272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.423300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.423420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.423460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.378 [2024-11-15 12:48:11.423644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.423677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 
00:26:31.378 [2024-11-15 12:48:11.423848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.378 [2024-11-15 12:48:11.423875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.378 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.423994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.424021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.424105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.424132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.424277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.424304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.424503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.424535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.424639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.424676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.424860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.424887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.425003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.425029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.425122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.425159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.425279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.425312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 
00:26:31.379 [2024-11-15 12:48:11.425434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.425478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.425600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.425639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.425806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.425836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.425922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.425949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.426067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.426102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.426198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.426225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.426362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.426413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.426572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.426612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.426785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.426813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.426962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.426988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 
00:26:31.379 [2024-11-15 12:48:11.427134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.427161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.427316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.427344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.427477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.427535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.427677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.427725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.427859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.427887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.428005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.428031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.428109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.428136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.428260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.428297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.428553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.428586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.428690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.428732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 
00:26:31.379 [2024-11-15 12:48:11.428869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.428895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.428992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.429019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.429111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.429143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.429222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.429248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.379 [2024-11-15 12:48:11.429399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.379 [2024-11-15 12:48:11.429436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.379 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.429579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.429618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.429804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.429832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.429943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.429969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.430076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.430103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.430227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.430253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 
00:26:31.380 [2024-11-15 12:48:11.430413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.430462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.430582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.430637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.430789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.430817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.430906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.430933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.431023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.431050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.431139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.431165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.431288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.431315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.431404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.431430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.431583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.431610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.431748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.431782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 
00:26:31.380 [2024-11-15 12:48:11.431898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.431926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.432013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.432040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.432208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.432242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.432356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.432389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.432571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.432623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.432765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.432799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.432906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.432939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.433122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.433161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.433287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.433326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.433451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.433498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 
00:26:31.380 [2024-11-15 12:48:11.433675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.433745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.433862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.380 [2024-11-15 12:48:11.433896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.380 qpair failed and we were unable to recover it. 00:26:31.380 [2024-11-15 12:48:11.434038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.434072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.434275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.434314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.434489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.434522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.434671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.434704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.434914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.434952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.435126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.435159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.435294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.435327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.435438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.435471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 
00:26:31.381 [2024-11-15 12:48:11.435669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.435709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.435893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.435925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.436090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.436123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.436284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.436325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.436470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.436509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.436705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.436769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.436902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.436945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.437153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.437186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.437286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.437318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.437415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.437449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 
00:26:31.381 [2024-11-15 12:48:11.437580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.437613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.437737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.437771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.437917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.437950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.438150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.438190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.438312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.438351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.438531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.438563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.438708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.438749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.438896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.438929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.439036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.439068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.439206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.439239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 
00:26:31.381 [2024-11-15 12:48:11.439377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.439417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.439578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.439617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.439790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.439824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.439972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.440004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.440182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.440222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.440364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.440398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.440539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.440572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.440750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.381 [2024-11-15 12:48:11.440785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.381 qpair failed and we were unable to recover it. 00:26:31.381 [2024-11-15 12:48:11.440950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.440982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.441161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.441194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 
00:26:31.382 [2024-11-15 12:48:11.441325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.441381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.441540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.441577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.441765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.441818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.442004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.442049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.442181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.442224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.442427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.442473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.442685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.442729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.442915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.442968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.443139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.443174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.443322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.443358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 
00:26:31.382 [2024-11-15 12:48:11.443476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.443510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.443677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.443758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.443884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.443921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.444105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.444147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.444339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.444381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.444528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.444562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.444752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.444809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.444955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.444990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.445133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.445169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.445327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.445370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 
00:26:31.382 [2024-11-15 12:48:11.445555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.445596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.445783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.445841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.445986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.446020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.446167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.446228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.446430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.446472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.446631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.446667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.446862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.446897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.447100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.447152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.447319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.447363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.447493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.447543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 
00:26:31.382 [2024-11-15 12:48:11.447763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.447807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.447990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.448053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.448215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.382 [2024-11-15 12:48:11.448257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.382 qpair failed and we were unable to recover it. 00:26:31.382 [2024-11-15 12:48:11.448433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.448475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.448648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.448694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.448850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.448892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.449069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.449122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.449273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.449319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.449490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.449532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.449742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.449785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 
00:26:31.383 [2024-11-15 12:48:11.449983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.450024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.450182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.450226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.450448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.450491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.450614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.450655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.450855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.450900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.451088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.451123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.451294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.451337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.451469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.451503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.451747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.451787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.451930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.451972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 
00:26:31.383 [2024-11-15 12:48:11.452128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.452168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.452367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.452409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.452520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.452554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.452672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.452706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.452887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.452947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.453114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.453155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.453299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.453339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.453543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.453583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.453750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.453785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.453920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.453952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 
00:26:31.383 [2024-11-15 12:48:11.454111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.454151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.454295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.454328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.454466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.454499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.454686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.454752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.454945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.454984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.455190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.455224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.455347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.383 [2024-11-15 12:48:11.455381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.383 qpair failed and we were unable to recover it. 00:26:31.383 [2024-11-15 12:48:11.455534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.455573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.455730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.455771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.455927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.455967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 
00:26:31.384 [2024-11-15 12:48:11.456132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.456172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.456335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.456375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.456542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.456582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.456774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.456813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.456980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.457013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.457123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.457156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.457265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.457321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.457515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.457555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.457732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.457773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.457946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.457980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 
00:26:31.384 [2024-11-15 12:48:11.458146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.458205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.458387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.458437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.458631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.458670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.458835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.458875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.459042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.459100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.459280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.459322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.459536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.459576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.459709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.459761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.459892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.459932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.460093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.460133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 
00:26:31.384 [2024-11-15 12:48:11.460271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.460310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.460453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.460493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.460642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.460682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.460834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.460874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.461077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.461114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.461239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.461273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.461492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.461533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.461700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.384 [2024-11-15 12:48:11.461743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.384 qpair failed and we were unable to recover it. 00:26:31.384 [2024-11-15 12:48:11.461866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.461899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.462052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.462092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 
00:26:31.385 [2024-11-15 12:48:11.462221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.462260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.462431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.462470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.462621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.462661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.462840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.462880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.463051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.463091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.463213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.463252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.463384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.463423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.463605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.463662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.463844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.463884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.464016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.464055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 
00:26:31.385 [2024-11-15 12:48:11.464236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.464269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.464410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.464442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.464635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.464686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.464908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.464941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.465092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.465125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.465284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.465326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.465505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.465547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.465713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.465766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.465900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.465942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.466141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.466181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 
00:26:31.385 [2024-11-15 12:48:11.466342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.466381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.466527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.466576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.466765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.466808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.466947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.466979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.467121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.467153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.467298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.467340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.467452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.467494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.385 qpair failed and we were unable to recover it. 00:26:31.385 [2024-11-15 12:48:11.467627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.385 [2024-11-15 12:48:11.467668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.467846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.467889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.468005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.468046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 
00:26:31.386 [2024-11-15 12:48:11.468222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.468263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.468385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.468426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.468581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.468622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.468819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.468860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.469022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.469061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.469263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.469306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.469495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.469545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.469744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.469777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.469889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.469921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.470082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.470121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 
00:26:31.386 [2024-11-15 12:48:11.470293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.470332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.470523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.470562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.470750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.470784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.470922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.470956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.471171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.471221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.471357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.471390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.471519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.471560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.471760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.471803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.471951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.472001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.472147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.472189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 
00:26:31.386 [2024-11-15 12:48:11.472330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.472371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.472539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.472581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.472751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.472793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.472974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.473007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.473173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.473206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.473325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.473364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.473510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.473548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.473701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.473747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.473935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.473969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.474114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.474146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 
00:26:31.386 [2024-11-15 12:48:11.474304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.474346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.386 [2024-11-15 12:48:11.474516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.386 [2024-11-15 12:48:11.474549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.386 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.474684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.474765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.474926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.474974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.475166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.475213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.475401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.475446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.475631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.475684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.475864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.475924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.476138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.476179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.476328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.476363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 
00:26:31.387 [2024-11-15 12:48:11.476492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.476528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.476647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.476681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.476798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.476834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.476973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.477006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.477120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.477153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.477293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.477346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.477526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.477558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.477665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.477698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.477847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.477879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.478076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.478133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 
00:26:31.387 [2024-11-15 12:48:11.478310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.478365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.478549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.478608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.478804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.478845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.479058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.479100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.479249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.479293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.479508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.479550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.479752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.479787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.479900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.479932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.480093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.480132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.480314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.480356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 
00:26:31.387 [2024-11-15 12:48:11.480500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.480541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.480752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.480796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.480934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.480977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.481183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.481216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.481321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.481355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.481463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.387 [2024-11-15 12:48:11.481496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.387 qpair failed and we were unable to recover it. 00:26:31.387 [2024-11-15 12:48:11.481671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.481711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.481890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.481954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.482165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.482206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.482381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.482422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 
00:26:31.388 [2024-11-15 12:48:11.482613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.482653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.482832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.482875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.483057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.483099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.483307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.483349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.483481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.483523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.483643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.483703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.483908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.483948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.484115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.484155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.484316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.484360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.484488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.484532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 
00:26:31.388 [2024-11-15 12:48:11.484731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.484766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.484903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.484936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.485070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.485110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.485252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.485291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.485472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.485515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.485683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.485736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.485897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.485962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.486119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.486165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.486342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.486387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.486571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.486629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 
00:26:31.388 [2024-11-15 12:48:11.486749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.486783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.486920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.486953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.487118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.487160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.487289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.487331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.487510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.487552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.487728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.487771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.487911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.487977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.488118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.488171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.488313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.488346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.488523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.488562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 
00:26:31.388 [2024-11-15 12:48:11.488763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.488809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.388 [2024-11-15 12:48:11.489036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.388 [2024-11-15 12:48:11.489069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.388 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.489199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.489232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.489392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.489433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.489638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.489679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.489866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.489908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.490078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.490122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.490248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.490290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.490477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.490517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.490680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.490755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 
00:26:31.389 [2024-11-15 12:48:11.490898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.490954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.491088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.491128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.491324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.491363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.491520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.491568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.491757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.491791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.491914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.491947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.492121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.492160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.492295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.492335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.492550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.492591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.492746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.492789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 
00:26:31.389 [2024-11-15 12:48:11.492978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.493020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.493149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.493190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.493370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.493410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.493622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.493664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.493886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.493929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.494069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.494110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.494283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.494325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.494499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.494541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.494744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.494778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.494917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.494950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 
00:26:31.389 [2024-11-15 12:48:11.495069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.495104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.495248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.495290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.495420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.495462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.495628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.495670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.495852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.495892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.496030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.496070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.496222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.496262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.389 [2024-11-15 12:48:11.496396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.389 [2024-11-15 12:48:11.496435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.389 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.496597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.496636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.496815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.496858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 
00:26:31.390 [2024-11-15 12:48:11.497012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.497061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.497193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.497249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.497375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.497414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.497539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.497579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.497710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.497762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.497875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.497914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.498086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.498142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.498309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.498365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.498552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.498586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.498736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.498770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 
00:26:31.390 [2024-11-15 12:48:11.498938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.498980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.499155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.499198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.499364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.499396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.499496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.499528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.499676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.499709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.499857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.499907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.500061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.500094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.500275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.500326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.500423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.500456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.500623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.500663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 
00:26:31.390 [2024-11-15 12:48:11.500842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.500884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.501019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.501063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.501252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.501297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.501443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.501483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.501632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.501672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.501832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.501873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.502032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.502073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.502313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.390 [2024-11-15 12:48:11.502359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.390 qpair failed and we were unable to recover it. 00:26:31.390 [2024-11-15 12:48:11.502490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.502530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.502674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.502714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 
00:26:31.391 [2024-11-15 12:48:11.502889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.502936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.503160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.503211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.503355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.503388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.503548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.503587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.503753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.503794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.503934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.503974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.504185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.504237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.504482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.504533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.504768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.504813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.504964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.505010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 
00:26:31.391 [2024-11-15 12:48:11.505185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.505245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.505382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.505437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.505562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.505601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.505786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.505828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.505955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.505994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.506162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.506202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.506409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.506453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.506638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.506682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.506840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.506909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.507124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.507176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 
00:26:31.391 [2024-11-15 12:48:11.507445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.507496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.507747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.507788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.508005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.508042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.508174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.508212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.508394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.508434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.508614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.508659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.508855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.508901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.509051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.509096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.509239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.509279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.509431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.509471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 
00:26:31.391 [2024-11-15 12:48:11.509672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.509705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.391 [2024-11-15 12:48:11.509850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.391 [2024-11-15 12:48:11.509882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.391 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.510017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.510055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.510211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.510252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.510400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.510439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.510657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.510707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.510926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.510971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.511186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.511230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.511415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.511474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.511713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.511773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 
00:26:31.392 [2024-11-15 12:48:11.511913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.511957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.512177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.512217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.512405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.512449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.512660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.512704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.512873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.512919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.513153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.513193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.513367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.513400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.513541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.513574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.513782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.513828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 00:26:31.392 [2024-11-15 12:48:11.514007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.392 [2024-11-15 12:48:11.514040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.392 qpair failed and we were unable to recover it. 
00:26:31.392 [2024-11-15 12:48:11.514184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.392 [2024-11-15 12:48:11.514217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420
00:26:31.392 qpair failed and we were unable to recover it.
00:26:31.392 [... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats for every retry between these timestamps ...]
00:26:31.398 [2024-11-15 12:48:11.561040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.398 [2024-11-15 12:48:11.561072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420
00:26:31.398 qpair failed and we were unable to recover it.
00:26:31.398 [2024-11-15 12:48:11.561241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.398 [2024-11-15 12:48:11.561300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.398 qpair failed and we were unable to recover it. 00:26:31.398 [2024-11-15 12:48:11.561467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.398 [2024-11-15 12:48:11.561514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.561707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.561765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.561952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.561984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.562092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.562125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.562313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.562360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.562532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.562579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.562746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.562795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.562974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.563022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.563214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.563261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 
00:26:31.399 [2024-11-15 12:48:11.563420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.563467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.563688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.563746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.563928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.563975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.564130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.564177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.564402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.564450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.564685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.564743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.564915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.564980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.565202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.565250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.565480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.565545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.565708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.565770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 
00:26:31.399 [2024-11-15 12:48:11.565940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.566005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.566249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.566314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.566501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.566560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.566750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.566784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.566926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.566959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.567208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.567240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.567375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.567407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.567574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.567607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.567839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.567872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.568014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.568047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 
00:26:31.399 [2024-11-15 12:48:11.568261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.568326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.568480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.568527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.568773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.568838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.399 [2024-11-15 12:48:11.569058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.399 [2024-11-15 12:48:11.569122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.399 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.569344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.569391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.569616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.569664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.569857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.569923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.570143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.570208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.570397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.570444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.570627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.570675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 
00:26:31.400 [2024-11-15 12:48:11.570934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.570968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.571109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.571163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.571407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.571471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.571695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.571756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.571965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.572030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.572243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.572307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.572531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.572578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.572791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.572857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.573106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.573170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.573417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.573489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 
00:26:31.400 [2024-11-15 12:48:11.573682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.573740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.573960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.574024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.574288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.574321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.574485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.574518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.574663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.574712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.574952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.575017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.575247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.575310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.575503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.575550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.575746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.575803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.576025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.576089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 
00:26:31.400 [2024-11-15 12:48:11.576298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.576361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.576557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.576603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.576809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.576876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.400 [2024-11-15 12:48:11.577094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.400 [2024-11-15 12:48:11.577158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.400 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.577384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.577432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.577666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.577714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.577976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.578040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.578215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.578273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.578428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.578475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.578688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.578756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 
00:26:31.401 [2024-11-15 12:48:11.578947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.579013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.579258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.579322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.579515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.579562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.579749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.579798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.579996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.580062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.580261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.580325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.580524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.580571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.580778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.580845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.581065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.581113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.581287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.581353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 
00:26:31.401 [2024-11-15 12:48:11.581538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.581584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.581829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.581896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.582128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.582191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.582385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.582432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.582621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.582668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.582842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.582888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.583110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.583148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.583301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.583339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.583557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.583604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.583830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.583895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 
00:26:31.401 [2024-11-15 12:48:11.584106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.584171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.584398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.584445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.584610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.584658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.584920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.584986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.585179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.585242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.585433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.585482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.585707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.585769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.586011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.401 [2024-11-15 12:48:11.586083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.401 qpair failed and we were unable to recover it. 00:26:31.401 [2024-11-15 12:48:11.586301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.586350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.586546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.586592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 
00:26:31.402 [2024-11-15 12:48:11.586755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.586803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.587052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.587117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.587373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.587437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.587625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.587672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.587949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.587987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.588154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.588192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.588399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.588463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.588639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.588686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.588957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.589022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.589252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.589321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 
00:26:31.402 [2024-11-15 12:48:11.589518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.589564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.589757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.589805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.589979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.590048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.590249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.590316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.590539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.590587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.590797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.590865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.591095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.591142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.591312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.591366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.591568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.591616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.591813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.591879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 
00:26:31.402 [2024-11-15 12:48:11.592040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.592109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.592289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.592337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.592524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.592573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.592777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.592845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.593098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.593163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.593348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.593395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.593555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.593603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.593852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.593921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.594105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.594170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.594402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.594450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 
00:26:31.402 [2024-11-15 12:48:11.594611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.594658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.594906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.594972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.402 qpair failed and we were unable to recover it. 00:26:31.402 [2024-11-15 12:48:11.595198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.402 [2024-11-15 12:48:11.595263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.595489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.595536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.595791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.595858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.596069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.596135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.596337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.596384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.596570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.596617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.596833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.596900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.597100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.597165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 
00:26:31.403 [2024-11-15 12:48:11.597356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.597403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.597628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.597675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.597891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.597956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.598210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.598275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.598463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.598519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.598734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.598782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.599035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.599100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.599363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.599426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.599614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.599660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.599890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.599956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 
00:26:31.403 [2024-11-15 12:48:11.600226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.600290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.600467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.600533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.600716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.600798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.601059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.601096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.601222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.601259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.601405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.601443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.601595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.601632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.601837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.601885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.602094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.602142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.602329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.602375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 
00:26:31.403 [2024-11-15 12:48:11.602531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.602578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.602745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.602795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.602989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.603035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.603216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.403 [2024-11-15 12:48:11.603254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.403 qpair failed and we were unable to recover it. 00:26:31.403 [2024-11-15 12:48:11.603435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.603472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.603680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.603730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.603887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.603924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.604168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.604215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.604414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.604461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.604649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.604696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 
00:26:31.404 [2024-11-15 12:48:11.604903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.604967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.605192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.605246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.605431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.605503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.605692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.605801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.605985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.606051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.606247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.606312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.606538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.606584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.606772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.606822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.607021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.607069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.607257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.607304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 
00:26:31.404 [2024-11-15 12:48:11.607451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.607497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.607666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.607713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.607986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.608052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.608280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.608344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.608587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.608625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.608776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.608816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.609022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.609088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.609318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.609384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.609620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.609666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.609909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.609978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 
00:26:31.404 [2024-11-15 12:48:11.610227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.610293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.610524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.610572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.610765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.610814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.611036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.611101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.611234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.611280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.611469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.611517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.611661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.611708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.611936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.404 [2024-11-15 12:48:11.612001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.404 qpair failed and we were unable to recover it. 00:26:31.404 [2024-11-15 12:48:11.612257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.612322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.612556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.612604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 
00:26:31.405 [2024-11-15 12:48:11.612857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.612924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.613155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.613221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.613416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.613482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.613691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.613749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.613973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.614037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.614266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.614333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.614536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.614583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.614794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.614863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.615116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.615182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.615383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.615450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 
00:26:31.405 [2024-11-15 12:48:11.615614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.615661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.615926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.615991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.616263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.616329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.616562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.616610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.616778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.616851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.617008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.617074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.617350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.617417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.617641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.617688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.617937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.618008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.618212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.618282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 
00:26:31.405 [2024-11-15 12:48:11.618571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.618618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.618812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.618881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.619119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.619187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.619447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.619513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.619744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.619795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.620060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.620099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.620273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.620311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.620550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.620598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.620796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.620865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.621094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.621170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 
00:26:31.405 [2024-11-15 12:48:11.621378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.621444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.621624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.405 [2024-11-15 12:48:11.621672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.405 qpair failed and we were unable to recover it. 00:26:31.405 [2024-11-15 12:48:11.621894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.621960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.622182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.622230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.622486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.622552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.622714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.622784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.622989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.623027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.623187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.623225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.623357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.623395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.623605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.623660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 
00:26:31.406 [2024-11-15 12:48:11.623870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.623918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.624116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.624164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.624315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.624362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.624508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.624555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.624743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.624793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.624947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.624996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.625185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.625223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.625369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.625408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.625623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.625669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.625868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.625917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 
00:26:31.406 [2024-11-15 12:48:11.626098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.626146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.626320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.626367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.626524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.626573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.626759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.626799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.626946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.626984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.627160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.627226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.627431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.627478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.627701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.627759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.628015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.628081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.406 [2024-11-15 12:48:11.628336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.628403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 
00:26:31.406 [2024-11-15 12:48:11.628628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.406 [2024-11-15 12:48:11.628676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.406 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.629010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.629111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.629476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.629558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.629803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.629865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.630136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.630203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.630433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.630503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.630797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.630869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.631078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.631147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.631362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.631428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.631587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.631634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 
00:26:31.407 [2024-11-15 12:48:11.631823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.631871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.632098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.632163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.632330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.632399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.632552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.632599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.632758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.632806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.632989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.633037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.633222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.633270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.633413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.633460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.633647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.633694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.633913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.633968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 
00:26:31.407 [2024-11-15 12:48:11.634185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.634236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.634424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.634481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.634683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.634749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.634942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.635008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.635264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.635329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.635503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.635551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.635744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.635795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.635990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.636038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.636229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.636277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.636470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.636517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 
00:26:31.407 [2024-11-15 12:48:11.636704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.636781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.637008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.637056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.637259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.637324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.637536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.637609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.637832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.637900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.638113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.638179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.407 [2024-11-15 12:48:11.638387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.407 [2024-11-15 12:48:11.638454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.407 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.638650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.638697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.638929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.638995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.639203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.639270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 
00:26:31.408 [2024-11-15 12:48:11.639479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.639517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.639641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.639679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.639906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.639972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.640237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.640275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.640460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.640498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.640611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.640648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.640911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.640979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.641222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.641294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.641493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.641540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.641742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.641790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 
00:26:31.408 [2024-11-15 12:48:11.642028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.642095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.642277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.642343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.642533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.642580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.642764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.642813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.643007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.643074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.643283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.643350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.643541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.643589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.643792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.643831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.644014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.644052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.644284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.644358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 
00:26:31.408 [2024-11-15 12:48:11.644512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.644558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.644752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.644801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.645015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.645082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.645283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.645329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.645519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.645567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.645766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.645830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.646090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.646158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.646378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.646425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.646611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.646658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.646861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.646900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 
00:26:31.408 [2024-11-15 12:48:11.647009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.647047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.647200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.408 [2024-11-15 12:48:11.647239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.408 qpair failed and we were unable to recover it. 00:26:31.408 [2024-11-15 12:48:11.647427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.647474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.647699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.647764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.647962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.648027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.648264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.648336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.648506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.648554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.648775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.648823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.649017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.649064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.649301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.649369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 
00:26:31.409 [2024-11-15 12:48:11.649561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.649607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.649867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.649936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.650143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.650214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.650448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.650485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.650618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.650656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.650844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.650914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.651130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.651197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.651448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.651512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.651668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.651714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.651940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.652007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 
00:26:31.409 [2024-11-15 12:48:11.652217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.652284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.652441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.652487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.652679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.652739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.652953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.653018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.653272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.653339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.653557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.653605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.653791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.653866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.654066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.654139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.654419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.654487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.654713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.654771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 
00:26:31.409 [2024-11-15 12:48:11.655037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.655103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.655360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.655435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.655632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.655680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.655877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.655942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.656157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.656224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.656427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.656493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.656662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.656709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.656932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.656998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.657228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.657275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.657498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.657567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 
00:26:31.409 [2024-11-15 12:48:11.657736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.657785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.657933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.657979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.658174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.409 [2024-11-15 12:48:11.658222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.409 qpair failed and we were unable to recover it. 00:26:31.409 [2024-11-15 12:48:11.658442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.658490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.658664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.658711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.658942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.659009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.659263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.659329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.659515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.659563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.659746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.659796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.660019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.660086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 
00:26:31.410 [2024-11-15 12:48:11.660261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.660329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.660510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.660557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.660750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.660799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.661058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.661125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.661342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.661407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.661596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.661645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.661836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.661905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.662144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.662181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.662314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.662359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.662592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.662640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 
00:26:31.410 [2024-11-15 12:48:11.662914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.662981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.663168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.663237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.663398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.663445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.663632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.663679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.663860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.663931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.664202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.664268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.664454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.664501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.664675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.664734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.664931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.664979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.665165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.665214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 
00:26:31.410 [2024-11-15 12:48:11.665391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.665438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.665629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.665676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.665830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.665878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.666045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.666092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.410 [2024-11-15 12:48:11.666304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.410 [2024-11-15 12:48:11.666351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.410 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.666544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.666591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.666809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.666879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.667088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.667159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.667386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.667434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.667628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.667675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 
00:26:31.411 [2024-11-15 12:48:11.667850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.667920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.668201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.668275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.668459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.668507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.668702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.668777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.668917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.668965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.669196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.669272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.669479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.669550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.669706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.669774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.669949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.669989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.670150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.670218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 
00:26:31.411 [2024-11-15 12:48:11.670408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.670455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.670643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.670691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.670873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.670946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.671143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.671210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.671375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.671422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.671617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.671664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.671892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.671960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.672177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.672242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.672399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.672446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.672643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.672691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 
00:26:31.411 [2024-11-15 12:48:11.672902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.672968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.673187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.673235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.673405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.673451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.673635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.673682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.673956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.674040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.674315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.674389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.674690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.674820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.675051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.675101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.675300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.675374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.675532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.675579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 
00:26:31.411 [2024-11-15 12:48:11.675765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.675814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.411 qpair failed and we were unable to recover it. 00:26:31.411 [2024-11-15 12:48:11.676046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.411 [2024-11-15 12:48:11.676111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.676289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.676355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.676563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.676610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.676831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.676899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.677161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.677199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.677321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.677360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.677544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.677591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.677756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.677806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.678024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.678091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 
00:26:31.412 [2024-11-15 12:48:11.678315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.678381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.678575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.678623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.678818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.678890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.679117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.679181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.679391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.679461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.679626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.679674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.679969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.680059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.680326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.680397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.680565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.680605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.680784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.680828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 
00:26:31.412 [2024-11-15 12:48:11.680984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.681025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.681231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.681279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.681550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.681614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.681891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.681942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.682201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.682264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.682476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.682515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.682701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.682758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.682875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.682915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.683058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.683101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.683241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.683288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 
00:26:31.412 [2024-11-15 12:48:11.683508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.683569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.683774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.683830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.684038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.684100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.684312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.684351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.684548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.412 [2024-11-15 12:48:11.684589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.412 qpair failed and we were unable to recover it. 00:26:31.412 [2024-11-15 12:48:11.684735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.684776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.684899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.684947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.685094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.685152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.685284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.685324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.685482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.685521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 
00:26:31.690 [2024-11-15 12:48:11.685783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.685832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.685980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.686018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.686142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.686180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.686302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.686340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.686478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.686516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.686635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.686673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.686847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.686886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.687052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.687090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.687247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.687285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.687410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.687448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 
00:26:31.690 [2024-11-15 12:48:11.687610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.687649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.687795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.687834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.687949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.687989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.688124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.688162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.688311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.690 [2024-11-15 12:48:11.688349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.690 qpair failed and we were unable to recover it. 00:26:31.690 [2024-11-15 12:48:11.688461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.688499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.688644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.688689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.688820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.688858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.688987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.689027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.689221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.689259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 
00:26:31.691 [2024-11-15 12:48:11.689382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.689419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.689538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.689576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.689750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.689789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.689973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.690011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.690164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.690202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.690356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.690418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.690650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.690698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.690913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.690979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.691239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.691276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.691434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.691472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 
00:26:31.691 [2024-11-15 12:48:11.691602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.691643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.691799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.691838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.692011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.692058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.692250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.692298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.692479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.692526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.692705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.692772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.693013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.693061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.693245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.693292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.693429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.693477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.693675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.693736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 
00:26:31.691 [2024-11-15 12:48:11.693937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.693984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.694169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.694207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.694333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.694371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.694522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.694560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.694750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.694800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.694941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.694990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.695217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.695283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.695451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.695499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.695657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.695704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.695944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.696010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 
00:26:31.691 [2024-11-15 12:48:11.696184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.696251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.696387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.696435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.696603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.696650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.696893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.696960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.697132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.697199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.697438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.697486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.697652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.697700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.697969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.698052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.698282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.698355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.698587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.698670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 
00:26:31.691 [2024-11-15 12:48:11.698934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.699003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.699255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.699323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.699509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.699558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.699730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.699780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.699990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.700057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.691 [2024-11-15 12:48:11.700247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.691 [2024-11-15 12:48:11.700316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.691 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.700456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.700504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.700679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.700737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.700897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.700971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.701155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.701203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 
00:26:31.692 [2024-11-15 12:48:11.701427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.701474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.701642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.701689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.701903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.701950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.702141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.702191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.702330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.702378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.702541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.702587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.702774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.702824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.703058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.703105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.703331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.703378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.703603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.703650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 
00:26:31.692 [2024-11-15 12:48:11.703869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.703936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.704166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.704213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.704475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.704540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.704745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.704794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.705031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.705074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.705231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.705270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.705479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.705526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.705736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.705798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.706066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.706130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.706343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.706407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 
00:26:31.692 [2024-11-15 12:48:11.706599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.706646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.706873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.706939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.707155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.707223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.707425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.707472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.707621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.707668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.707901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.707967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.708137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.708203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.708401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.708449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.708622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.708672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.708889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.708956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 
00:26:31.692 [2024-11-15 12:48:11.709185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.709232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.709384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.709434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.709629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.709676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.709875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.709923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.710143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.710189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.710379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.710426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.710618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.710665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.710855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.710902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.711134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.711182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.711345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.711397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 
00:26:31.692 [2024-11-15 12:48:11.711625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.711673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.711881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.711937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.712136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.712185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.712382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.712431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.712649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.712697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.712949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.713025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.713294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.713362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.713564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.713612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.713836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.713886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.714057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.714144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 
00:26:31.692 [2024-11-15 12:48:11.714355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.714424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.714628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.714675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.714865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.714931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.715221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.715289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.692 [2024-11-15 12:48:11.715462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.692 [2024-11-15 12:48:11.715509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.692 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.715710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.715767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.715964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.716012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.716238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.716285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.716451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.716516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.716712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.716768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 
00:26:31.693 [2024-11-15 12:48:11.716919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.716966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.717178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.717225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.717375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.717422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.717574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.717621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.717867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.717934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.718112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.718177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.718385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.718451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.718672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.718731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.718959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.719015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.719278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.719346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 
00:26:31.693 [2024-11-15 12:48:11.719536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.719586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.719842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.719910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.720111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.720176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.720424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.720489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.720678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.720735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.720928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.720975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.721118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.721166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.721374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.721439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.721596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.721643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.721848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.721874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 
00:26:31.693 [2024-11-15 12:48:11.721963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.721988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.722122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.722147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.722237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.722262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.722380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.722406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.722528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.722552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.722663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.722688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.722895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.722944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.723109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.723157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.723306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.723353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 00:26:31.693 [2024-11-15 12:48:11.723507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.693 [2024-11-15 12:48:11.723553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.693 qpair failed and we were unable to recover it. 
00:26:31.693 [2024-11-15 12:48:11.723693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.693 [2024-11-15 12:48:11.723783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420
00:26:31.693 qpair failed and we were unable to recover it.
[The same three-line failure (connect() errno = 111 from posix_sock_create, the nvme_tcp_qpair_connect_sock error for tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged between 12:48:11.723872 and 12:48:11.767424; only the timestamps differ. Console timestamps run from 00:26:31.693 to 00:26:31.696.]
00:26:31.696 [2024-11-15 12:48:11.767604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.767651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.767871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.767937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.768219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.768283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.768435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.768483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.768674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.768733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.768935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.768982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.769162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.769209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.769353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.769400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.769585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.769634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.769878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.769928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 
00:26:31.696 [2024-11-15 12:48:11.770111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.770157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.770345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.770393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.770597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.770645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.770894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.770942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.771142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.771189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.771433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.771503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.771700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.771765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.771969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.772033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.772254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.772319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.772537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.772584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 
00:26:31.696 [2024-11-15 12:48:11.772745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.772793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.773001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.773066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.773251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.773314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.773537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.773584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.773796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.773863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.774068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.774133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.774349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.774414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.696 qpair failed and we were unable to recover it. 00:26:31.696 [2024-11-15 12:48:11.774638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.696 [2024-11-15 12:48:11.774686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.774886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.774952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.775091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.775138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 
00:26:31.697 [2024-11-15 12:48:11.775399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.775424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.775546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.775571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.775663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.775688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.775797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.775822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.775933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.775958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.776100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.776149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.776346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.776393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.776561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.776608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.776805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.776854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.777014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.777063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 
00:26:31.697 [2024-11-15 12:48:11.777270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.777318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.777552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.777599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.777791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.777839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.778042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.778089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.778373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.778440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.778637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.778685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.778936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.779008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.779276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.779343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.779538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.779586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.779817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.779884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 
00:26:31.697 [2024-11-15 12:48:11.780106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.780180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.780419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.780490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.780653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.780700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.780935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.781015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.781253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.781324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.781542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.781589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.781784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.781833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.782078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.782142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.782349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.782413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.782634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.782682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 
00:26:31.697 [2024-11-15 12:48:11.782845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.782912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.783205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.783269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.783488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.783534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.783668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.783716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.783951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.784016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.784234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.784299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.784521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.784569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.784770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.784819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.784981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.785052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.785294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.785358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 
00:26:31.697 [2024-11-15 12:48:11.785547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.785599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.785815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.785882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.786141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.786207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.786358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.786405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.786601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.786648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.786925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.786992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.787229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.787255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.787363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.787388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.787474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.787499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.787635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.787660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 
00:26:31.697 [2024-11-15 12:48:11.787781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.787811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.787895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.787921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.788036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.788061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.788185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.788246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.788435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.788482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.788667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.788715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.788956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.789034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.789319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.789386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.789606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.789653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.789887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.789954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 
00:26:31.697 [2024-11-15 12:48:11.790141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.790189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.790413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.790462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.790662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.790708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.790954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.791026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.791308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.791373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.791605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.791653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.791910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.791978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.792262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.792328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.792518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.792567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 00:26:31.697 [2024-11-15 12:48:11.792812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.792881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.697 qpair failed and we were unable to recover it. 
00:26:31.697 [2024-11-15 12:48:11.793090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.697 [2024-11-15 12:48:11.793157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.793331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.793399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.793623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.793671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.793933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.794000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.794227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.794293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.794483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.794532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.794708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.794764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.794948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.794996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.795221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.795287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.795525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.795550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 
00:26:31.698 [2024-11-15 12:48:11.795651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.795675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.795784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.795810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.795889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.795914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.796036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.796061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.796257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.796326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.796528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.796575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.796790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.796858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.797002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.797058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.797286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.797333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.797485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.797532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 
00:26:31.698 [2024-11-15 12:48:11.797681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.797748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.797966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.798043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.798226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.798274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.798423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.798471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.798650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.798674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.798793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.798819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.798969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.798995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.799240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.799287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.799501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.799549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 00:26:31.698 [2024-11-15 12:48:11.799784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.799832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it. 
00:26:31.698 [2024-11-15 12:48:11.800007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.800053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it.
00:26:31.698 [2024-11-15 12:48:11.800189 .. 12:48:11.805681] (the same three-line sequence -- posix_sock_create connect() failure with errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." -- repeats for every connection attempt in this interval)
00:26:31.698 [2024-11-15 12:48:11.805976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.698 [2024-11-15 12:48:11.806016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.698 qpair failed and we were unable to recover it.
00:26:31.698 .. 00:26:31.701 [2024-11-15 12:48:11.806118 .. 12:48:11.862982] (the same three-line sequence repeats continuously for tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420; every attempt fails with errno = 111 and ends with "qpair failed and we were unable to recover it.")
00:26:31.701 [2024-11-15 12:48:11.863229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.863295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.863500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.863564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.863793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.863861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.864122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.864188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.864380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.864445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.864760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.864826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.865081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.865145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.865412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.865477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.865686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.865778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.866076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.866141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 
00:26:31.701 [2024-11-15 12:48:11.866440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.866505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.866800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.866866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.867157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.867220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.867511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.867576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.867847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.867914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.868169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.868233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.868527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.868592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.868856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.868922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.869165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.869230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.869524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.869589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 
00:26:31.701 [2024-11-15 12:48:11.869838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.869907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.870165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.870229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.870491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.870557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.870775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.870844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.871056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.871122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.871341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.871405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.871611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.871679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.871925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.871992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.872277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.872342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.872640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.872704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 
00:26:31.701 [2024-11-15 12:48:11.872970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.873038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.873296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.873361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.873560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.873624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.873887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.873953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.874211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.874277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.874538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.874603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.701 qpair failed and we were unable to recover it. 00:26:31.701 [2024-11-15 12:48:11.874882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.701 [2024-11-15 12:48:11.874948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.875199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.875265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.875468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.875532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.875751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.875818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 
00:26:31.702 [2024-11-15 12:48:11.876037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.876105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.876325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.876389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.876686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.876770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.877032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.877097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.877365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.877429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.877627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.877692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.877939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.878005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.878215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.878279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.878517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.878592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.878879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.878945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 
00:26:31.702 [2024-11-15 12:48:11.879196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.879262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.879517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.879583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.879783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.879850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.880148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.880213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.880416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.880480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.880746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.880812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.881018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.881082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.881332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.881397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.881634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.881699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.881947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.882013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 
00:26:31.702 [2024-11-15 12:48:11.882264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.882329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.882580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.882645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.882957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.883033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.883323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.883387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.883642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.883707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.883984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.884049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.884319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.884383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.884640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.884705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.885012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.885077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.885287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.885351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 
00:26:31.702 [2024-11-15 12:48:11.885588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.885652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.885886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.885953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.886196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.886262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.886514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.886579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.886830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.886896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.887159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.887228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.887481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.887546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.887821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.887887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.888099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.888165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.888408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.888475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 
00:26:31.702 [2024-11-15 12:48:11.888688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.888778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.889021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.889086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.889328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.889394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.889653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.889732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.889952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.890020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.890311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.890376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.890671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.890754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.890978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.891043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.891235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.891310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.891560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.891625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 
00:26:31.702 [2024-11-15 12:48:11.891900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.891967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.892184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.892249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.892437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.892502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.892686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.892771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.893030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.893094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.893320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.893384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.893584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.893649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.893872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.893938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.894200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.894266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.894507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.894573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 
00:26:31.702 [2024-11-15 12:48:11.894830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.894899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.895142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.895206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.895422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.895488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.895748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.895815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.896014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.896079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.896321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.896386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.896615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.896680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.896982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.702 [2024-11-15 12:48:11.897047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.702 qpair failed and we were unable to recover it. 00:26:31.702 [2024-11-15 12:48:11.897332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.897398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.897614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.897680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 
00:26:31.703 [2024-11-15 12:48:11.897928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.897993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.898212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.898276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.898541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.898607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.898835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.898902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.899190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.899255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.899527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.899593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.899797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.899864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.900113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.900178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.900398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.900463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.900688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.900769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 
00:26:31.703 [2024-11-15 12:48:11.901008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.901074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.901347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.901412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.901661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.901739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.901975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.902042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.902319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.902383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.902595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.902660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.902984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.903052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.903263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.903328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.903530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.903598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.903843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.903910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 
00:26:31.703 [2024-11-15 12:48:11.904131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.904196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.904452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.904517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.904790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.904862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.905127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.905193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.905440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.905517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.905773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.905843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.906063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.906130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.906373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.906442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.906700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.906789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.907009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.907075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 
00:26:31.703 [2024-11-15 12:48:11.907316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.907385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.907630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.907695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.908038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.908118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.908400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.908466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.908707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.908798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.909097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.909167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.909456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.909522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.909819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.909890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.910142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.910209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.910412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.910477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 
00:26:31.703 [2024-11-15 12:48:11.910796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.910865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.911112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.911180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.911485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.911554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.911800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.911868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.912126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.912205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.912483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.912563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.912829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.912897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.913173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.913240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.913528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.913594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.913843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.913922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 
00:26:31.703 [2024-11-15 12:48:11.914216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.914281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.914525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.914602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.914886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.914956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.915218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.915298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.915566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.915634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.915907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.915974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.916204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.916273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.916495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.916562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.916768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.916838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.917125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.917194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 
00:26:31.703 [2024-11-15 12:48:11.917450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.917516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.917746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.917829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.918077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.918143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.918440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.918518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.703 [2024-11-15 12:48:11.918757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.703 [2024-11-15 12:48:11.918828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.703 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.919043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.919107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.919313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.919391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.919636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.919701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.919969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.920036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.920341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.920409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 
00:26:31.704 [2024-11-15 12:48:11.920658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.920746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.921006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.921074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.921335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.921403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.921630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.921696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.922002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.922071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.922291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.922356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.922601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.922686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.922949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.923017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.923260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.923325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.923593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.923662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 
00:26:31.704 [2024-11-15 12:48:11.923971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.924067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.924318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.924389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.924602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.924668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.924956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.925026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.925326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.925399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.925690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.925787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.926064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.926131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.926422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.926487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.926803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.926874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.927130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.927195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 
00:26:31.704 [2024-11-15 12:48:11.927492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.927570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.927885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.927953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.928164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.928248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.928479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.928546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.928799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.928869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.929166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.929236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.929493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.929559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.929776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.929860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.930112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.930183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.930415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.930482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 
00:26:31.704 [2024-11-15 12:48:11.930746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.930816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.931072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.931138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.931402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.931481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.931780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.931851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.932147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.932212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.932440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.932509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.932758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.932826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.933077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.933144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.933391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.933458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.933762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.933830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 
00:26:31.704 [2024-11-15 12:48:11.934069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.934137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.934350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.934415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.934691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.934793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.935090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.935158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.935410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.935491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.935810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.935878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.936127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.936196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.936438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.936505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.936761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.936830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.937102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.937186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 
00:26:31.704 [2024-11-15 12:48:11.937451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.937517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.937763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.937837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.938062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.938143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.938416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.938483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.938691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.938783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.939078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.939157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.939448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.939528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.939791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.939873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.940090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.940159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.940393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.940463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 
00:26:31.704 [2024-11-15 12:48:11.940762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.940831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.941092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.941172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.941442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.941510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.941699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.941791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.942051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.942120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.942379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.942444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.942636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.704 [2024-11-15 12:48:11.942732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.704 qpair failed and we were unable to recover it. 00:26:31.704 [2024-11-15 12:48:11.942978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.943047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.943338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.943403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.943688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.943802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 
00:26:31.705 [2024-11-15 12:48:11.944067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.944133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.944436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.944507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.944748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.944816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.945106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.945171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.945487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.945554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.945782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.945850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.946152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.946222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.946510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.946577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.946798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.946865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.947142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.947211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 
00:26:31.705 [2024-11-15 12:48:11.947456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.947522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.947763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.947842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.948172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.948240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.948493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.948575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.948797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.948864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.949071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.949139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.949393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.949473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.949746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.949824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.950095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.950167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.950425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.950491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 
00:26:31.705 [2024-11-15 12:48:11.950715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.950819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.951066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.951135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.951383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.951448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.951700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.951806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.952063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.952127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.952350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.952428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.952704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.952799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.953053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.953118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.953319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.953403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.953668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.953756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 
00:26:31.705 [2024-11-15 12:48:11.953971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.954041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.954319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.954388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.954601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.954669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.954969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.955044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.955297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.955365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.955614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.955698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.956012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.956081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.956309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.956375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.956641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.956710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.957009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.957076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 
00:26:31.705 [2024-11-15 12:48:11.957342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.957421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.957691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.957781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.958015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.958083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.958327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.958396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.958604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.958669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.958930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.959013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.959224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.959292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.959543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.959607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.959883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.959953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 00:26:31.705 [2024-11-15 12:48:11.960202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.705 [2024-11-15 12:48:11.960268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.705 qpair failed and we were unable to recover it. 
00:26:31.705 [2024-11-15 12:48:11.960510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.705 [2024-11-15 12:48:11.960578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:31.705 qpair failed and we were unable to recover it.
[... the same pair of errors repeats continuously from 12:48:11.960510 through 12:48:12.026856 (console time 00:26:31.705 to 00:26:31.982): posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:26:31.982 [2024-11-15 12:48:12.026997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.027066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.027323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.027408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.027668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.027752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.027966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.028033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.028197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.028241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.028432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.028508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.028805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.028842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.028995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.029074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.029315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.029381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.029694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.029795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 
00:26:31.982 [2024-11-15 12:48:12.029938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.029974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.030266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.030331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.030582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.030647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.030848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.030884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.031026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.031092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.031380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.031445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.031783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.031819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.031967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.032002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.032175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.032209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.032478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.032544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 
00:26:31.982 [2024-11-15 12:48:12.032848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.032917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.033127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.033195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.033439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.033507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.033784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.033853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.034108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.034175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.034470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.034504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.034642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.034676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.034936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.035002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.035310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.035376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.035683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.035767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 
00:26:31.982 [2024-11-15 12:48:12.036066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.036131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.982 [2024-11-15 12:48:12.036443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.982 [2024-11-15 12:48:12.036509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.982 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.036825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.036893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.037142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.037211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.037503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.037570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.037842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.037910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.038162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.038227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.038476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.038543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.038789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.038862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.039104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.039170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 
00:26:31.983 [2024-11-15 12:48:12.039462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.039529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.039827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.039895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.040195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.040261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.040556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.040622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.040934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.041002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.041215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.041291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.041547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.041614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.041943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.042011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.042229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.042297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.042601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.042667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 
00:26:31.983 [2024-11-15 12:48:12.043021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.043087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.043385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.043450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.043662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.043754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.044022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.044091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.044374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.044441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.044656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.044743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.044990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.045058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.045352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.045419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.045714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.045805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.046074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.046141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 
00:26:31.983 [2024-11-15 12:48:12.046403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.046469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.046670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.046757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.047016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.047084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.047387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.047452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.047710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.047827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.048050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.048117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.048367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.048433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.048754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.048822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.049066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.049132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 00:26:31.983 [2024-11-15 12:48:12.049385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.983 [2024-11-15 12:48:12.049454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.983 qpair failed and we were unable to recover it. 
00:26:31.984 [2024-11-15 12:48:12.049694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.049740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.049854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.049888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.050108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.050175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.050477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.050543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.050801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.050869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.051166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.051232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.051527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.051592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.051852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.051919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.052169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.052203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.052323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.052357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 
00:26:31.984 [2024-11-15 12:48:12.052548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.052612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.052891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.052959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.053206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.053240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.053417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.053451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.053656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.053739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.054003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.054081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.054369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.054435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.054657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.054743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.055034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.055099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.055403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.055468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 
00:26:31.984 [2024-11-15 12:48:12.055780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.055848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.056137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.056202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.056446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.056512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.056828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.056896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.057196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.057261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.057514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.057579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.057831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.057897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.058194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.058260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.058563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.058628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.058908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.058975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 
00:26:31.984 [2024-11-15 12:48:12.059223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.059288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.059583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.059648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.059976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.060043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.060348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.060413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.060619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.060684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.060963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.061029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.061332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.061397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-15 12:48:12.061652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-15 12:48:12.061739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.062036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.062104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.062401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.062466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 
00:26:31.985 [2024-11-15 12:48:12.062708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.062796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.063098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.063164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.063418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.063487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.063776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.063844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.064058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.064127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.064416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.064482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.064683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.064767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.065031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.065096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.065344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.065409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.065668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.065748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 
00:26:31.985 [2024-11-15 12:48:12.066038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.066104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.066401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.066466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.066757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.066823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.067111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.067176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.067466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.067531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.067776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.067846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.068154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.068221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.068526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.068592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.068799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.068868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-15 12:48:12.069159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-15 12:48:12.069225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 
00:26:31.985 [2024-11-15 12:48:12.069528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.985 [2024-11-15 12:48:12.069593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:31.985 qpair failed and we were unable to recover it.
00:26:31.985 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 12:48:12.069 through 12:48:12.138 ...]
00:26:31.991 [2024-11-15 12:48:12.138802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.991 [2024-11-15 12:48:12.138871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:31.991 qpair failed and we were unable to recover it.
00:26:31.991 [2024-11-15 12:48:12.139104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.139185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.139431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.139497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.139790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.139871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.140187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.140253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.140509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.140591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.140877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.140946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.141155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.141225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.141494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.141565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.141777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.141844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.142103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.142177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 
00:26:31.991 [2024-11-15 12:48:12.142467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.142536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.142759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.142827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.143043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.143124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.143345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.143413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.143651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.143716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.144059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.144128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.144397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.144462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.144681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.144781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.145056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.145123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.145414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.145493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 
00:26:31.991 [2024-11-15 12:48:12.145777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.145848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-15 12:48:12.146055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-15 12:48:12.146121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.146413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.146481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.146750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.146818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.147065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.147133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.147429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.147495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.147772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.147840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.148110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.148180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.148411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.148477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.148698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.148816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 
00:26:31.992 [2024-11-15 12:48:12.149087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.149155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.149368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.149435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.149744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.149816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.150080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.150145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.150440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.150517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.150772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.150840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.151111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.151180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.151460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.151525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.151767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.151835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.152117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.152186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 
00:26:31.992 [2024-11-15 12:48:12.152443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.152509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.152751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.152837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.153089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.153154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.153405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.153470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.153694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.153792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.154051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.154116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.154413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.154481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.154748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.154816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.155059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.155132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.155409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.155476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 
00:26:31.992 [2024-11-15 12:48:12.155710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.155800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.156056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.156128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.156343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.156426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.156714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.156827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.157101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.157170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.157354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.157421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.157698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.157802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.158061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.158129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.158429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.158498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 00:26:31.992 [2024-11-15 12:48:12.158743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.992 [2024-11-15 12:48:12.158812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.992 qpair failed and we were unable to recover it. 
00:26:31.993 [2024-11-15 12:48:12.159060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.159128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.159365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.159433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.159679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.159769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.160004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.160075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.160299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.160364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.160657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.160748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.161054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.161123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.161421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.161498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.161757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.161825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.162122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.162193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 
00:26:31.993 [2024-11-15 12:48:12.162467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.162534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.162826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.162895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.163154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.163223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.163474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.163538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.163789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.163870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.164186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.164255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.164509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.164575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.164847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.164916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.165168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.165236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.165527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.165597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 
00:26:31.993 [2024-11-15 12:48:12.165871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.165940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.166235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.166318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.166581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.166646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.166927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.166994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.167221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.167291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.167586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.167651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.167920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.167991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.168286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.168351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.168667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.168765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.169033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.169102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 
00:26:31.993 [2024-11-15 12:48:12.169346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.169411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.169683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.169774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.170001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.170078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.170376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.170445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.170657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.170745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.171057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.171127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.171418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.171483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.993 [2024-11-15 12:48:12.171698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.993 [2024-11-15 12:48:12.171786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.993 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.172062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.172131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.172345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.172411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 
00:26:31.994 [2024-11-15 12:48:12.172670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.172787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.173062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.173128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.173434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.173513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.173777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.173849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.174064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.174131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.174393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.174463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.174682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.174772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.175049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.175118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.175387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.175455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.175655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.175740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 
00:26:31.994 [2024-11-15 12:48:12.175980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.176049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.176355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.176422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.176675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.176762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.177086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.177153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.177413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.177481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.177746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.177815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.178047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.178112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.178380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.178448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.178746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.178815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.179120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.179189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 
00:26:31.994 [2024-11-15 12:48:12.179415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.179481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.179751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.179831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.180105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.180175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.180401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.180469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.180694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.180806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.181069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.181137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.181388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.181462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.181755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.181826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.182132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.182198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-15 12:48:12.182440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-15 12:48:12.182509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 
00:26:31.994 [2024-11-15 12:48:12.182785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.182852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.183111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.183182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.183499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.183577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.183849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.183916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.184209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.184276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.184576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.184641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.184904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.184974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.185264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.185330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.185581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.185660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.185946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.186013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 
00:26:31.995 [2024-11-15 12:48:12.186316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.186390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.186653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.186741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.187042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.187108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.187348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.187416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.187670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.187756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.187994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.188058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.188392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.188461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.188709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.188822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.189053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.189122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.189363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.189428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 
00:26:31.995 [2024-11-15 12:48:12.189644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.189745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.190032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.190099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.190389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.190455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.190762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.190835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.191141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.191206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.191495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.191565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.191764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.191831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.192091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.192167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.192443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.192509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.192784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.192852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 
00:26:31.995 [2024-11-15 12:48:12.193115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.193184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.193415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.193481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.193739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.193825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.194049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.194115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.194377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.194442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.194743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.194815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.195022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.195087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-15 12:48:12.195278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-15 12:48:12.195345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.195580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.195649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.195915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.195982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 
00:26:31.996 [2024-11-15 12:48:12.196273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.196352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.196609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.196676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.197024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.197120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.197436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.197502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.197805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.197878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.198135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.198201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.198454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.198526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.198812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.198881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.199130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.199195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.199487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.199556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 
00:26:31.996 [2024-11-15 12:48:12.199828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.199896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.200101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.200166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.200420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.200489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.200749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.200818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.201033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.201108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.201378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.201445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.201664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.201753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.202035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.202104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.202404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.202469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.202699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.202801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 
00:26:31.996 [2024-11-15 12:48:12.203043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.203112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.203374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.203445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.203746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.203814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.204036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.204101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.204386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.204456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.204782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.204851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.205121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.205203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.205478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.205545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.205815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.205883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.206188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.206255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 
00:26:31.996 [2024-11-15 12:48:12.206563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.206628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.206935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.207001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.207291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.207355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.207546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.207611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.996 qpair failed and we were unable to recover it. 00:26:31.996 [2024-11-15 12:48:12.207837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.996 [2024-11-15 12:48:12.207905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.208112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.208178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.208419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.208486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.208666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.208754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.209023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.209089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.209361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.209426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 
00:26:31.997 [2024-11-15 12:48:12.209665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.209752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.209980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.210046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.210333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.210408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.210734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.210805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.211051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.211115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.211313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.211380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.211651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.211716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.212007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.212072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.212322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.212387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.212687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.212791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 
00:26:31.997 [2024-11-15 12:48:12.213063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.213129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.213357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.213421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.213670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.213758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.214065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.214130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.214432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.214496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.214744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.214812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.215068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.215134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.215328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.215394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.215688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.215778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.216034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.216102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 
00:26:31.997 [2024-11-15 12:48:12.216348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.216413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.216704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.216792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.217043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.217107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.217408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.217472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.217734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.217803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.218106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.218170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.218437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.218502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.218789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.218858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.219104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.219171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.219430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.219495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 
00:26:31.997 [2024-11-15 12:48:12.219755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.219822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.220038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.220105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.997 qpair failed and we were unable to recover it. 00:26:31.997 [2024-11-15 12:48:12.220355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.997 [2024-11-15 12:48:12.220423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.220677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.220783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.221055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.221120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.221372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.221438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.221649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.221716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.222041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.222106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.222328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.222394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.222595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.222663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 
00:26:31.998 [2024-11-15 12:48:12.222898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.222965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.223216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.223282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.223525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.223609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.223850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.223918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.224129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.224195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.224489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.224554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.224795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.224864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.225152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.225217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.225466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.225532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.225835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.225902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 
00:26:31.998 [2024-11-15 12:48:12.226194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.226259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.226516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.226581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.226835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.226902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.227147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.227212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.227463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.227527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.227832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.227899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.228157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.228222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.228464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.228529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.228775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.228843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.229102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.229168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 
00:26:31.998 [2024-11-15 12:48:12.229412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.229477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.229740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.229809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.230047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.230112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.230373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.230437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.230744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.230810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.231105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.231170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.231472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.231536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.231829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.231896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.232139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.232204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 00:26:31.998 [2024-11-15 12:48:12.232465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.998 [2024-11-15 12:48:12.232531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.998 qpair failed and we were unable to recover it. 
00:26:31.998 [2024-11-15 12:48:12.232797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.232864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.233080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.233145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.233443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.233508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.233711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.233793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.234089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.234156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.234448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.234512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.234752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.234819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.235069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.235134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.235434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.235498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.235800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.235867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 
00:26:31.999 [2024-11-15 12:48:12.236110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.236176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.236466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.236530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.236841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.236908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.237175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.237240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.237523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.237588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.237832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.237901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.238202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.238267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.238514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.238579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.238838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.238905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 00:26:31.999 [2024-11-15 12:48:12.239157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.999 [2024-11-15 12:48:12.239222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:31.999 qpair failed and we were unable to recover it. 
00:26:31.999 [2024-11-15 12:48:12.239434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.999 [2024-11-15 12:48:12.239498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:31.999 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats roughly 200 more times between [2024-11-15 12:48:12.239] and [2024-11-15 12:48:12.308], log time 00:26:31.999 through 00:26:32.005 ...]
00:26:32.005 [2024-11-15 12:48:12.308210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.005 [2024-11-15 12:48:12.308275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.005 qpair failed and we were unable to recover it.
00:26:32.005 [2024-11-15 12:48:12.308534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.005 [2024-11-15 12:48:12.308599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.005 qpair failed and we were unable to recover it. 00:26:32.005 [2024-11-15 12:48:12.308864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.005 [2024-11-15 12:48:12.308933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.005 qpair failed and we were unable to recover it. 00:26:32.005 [2024-11-15 12:48:12.309190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.005 [2024-11-15 12:48:12.309255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.005 qpair failed and we were unable to recover it. 00:26:32.005 [2024-11-15 12:48:12.309501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.005 [2024-11-15 12:48:12.309568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.005 qpair failed and we were unable to recover it. 00:26:32.005 [2024-11-15 12:48:12.309824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.005 [2024-11-15 12:48:12.309892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.005 qpair failed and we were unable to recover it. 00:26:32.005 [2024-11-15 12:48:12.310193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.005 [2024-11-15 12:48:12.310258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.005 qpair failed and we were unable to recover it. 00:26:32.005 [2024-11-15 12:48:12.310524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.005 [2024-11-15 12:48:12.310589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.005 qpair failed and we were unable to recover it. 00:26:32.287 [2024-11-15 12:48:12.310775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.287 [2024-11-15 12:48:12.310842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.287 qpair failed and we were unable to recover it. 00:26:32.287 [2024-11-15 12:48:12.311105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.287 [2024-11-15 12:48:12.311170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.287 qpair failed and we were unable to recover it. 00:26:32.287 [2024-11-15 12:48:12.311426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.287 [2024-11-15 12:48:12.311502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.287 qpair failed and we were unable to recover it. 
00:26:32.287 [2024-11-15 12:48:12.311790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.287 [2024-11-15 12:48:12.311857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.287 qpair failed and we were unable to recover it. 00:26:32.287 [2024-11-15 12:48:12.312119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.287 [2024-11-15 12:48:12.312184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.287 qpair failed and we were unable to recover it. 00:26:32.287 [2024-11-15 12:48:12.312435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.287 [2024-11-15 12:48:12.312501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.287 qpair failed and we were unable to recover it. 00:26:32.287 [2024-11-15 12:48:12.312758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.287 [2024-11-15 12:48:12.312825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.287 qpair failed and we were unable to recover it. 00:26:32.287 [2024-11-15 12:48:12.313023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.287 [2024-11-15 12:48:12.313087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.287 qpair failed and we were unable to recover it. 00:26:32.287 [2024-11-15 12:48:12.313383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.287 [2024-11-15 12:48:12.313449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.287 qpair failed and we were unable to recover it. 00:26:32.287 [2024-11-15 12:48:12.313688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.287 [2024-11-15 12:48:12.313788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.287 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.314094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.314159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.314404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.314469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.314739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.314805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 
00:26:32.288 [2024-11-15 12:48:12.315048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.315112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.315309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.315374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.315626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.315691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.316015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.316081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.316333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.316397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.316643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.316707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.317030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.317095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.317328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.317392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.317684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.317786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.318083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.318147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 
00:26:32.288 [2024-11-15 12:48:12.318439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.318504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.318753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.318821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.319102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.319166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.319371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.319435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.319614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.319680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.319935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.320000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.320279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.320345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.320549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.320614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.320927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.320994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.321246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.321311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 
00:26:32.288 [2024-11-15 12:48:12.321611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.321675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.321995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.322060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.322319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.322386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.322598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.322663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.322976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.323042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.323285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.323350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.323590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.323654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.323978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.324045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.324337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.324402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.324702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.324795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 
00:26:32.288 [2024-11-15 12:48:12.325051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.325120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.325368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.325434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.325687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.325787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.326090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-15 12:48:12.326156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-15 12:48:12.326450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.326515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.326817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.326885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.327136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.327200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.327496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.327560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.327848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.327915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.328129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.328194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 
00:26:32.289 [2024-11-15 12:48:12.328495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.328559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.328822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.328888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.329175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.329240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.329454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.329522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.329816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.329883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.330181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.330245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.330433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.330499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.330751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.330818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.331064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.331128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.331371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.331436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 
00:26:32.289 [2024-11-15 12:48:12.331642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.331708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.331975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.332040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.332286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.332352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.332573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.332639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.332867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.332932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.333188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.333253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.333544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.333611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.333901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.333968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.334259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.334323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.334573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.334638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 
00:26:32.289 [2024-11-15 12:48:12.334918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.334985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.335270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.335334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.335628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.335692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.335959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.336025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.336290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.336355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.336615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.336680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.336948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.337013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.337234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.337299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.337504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.337568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.337863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.337942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 
00:26:32.289 [2024-11-15 12:48:12.338230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.338295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-15 12:48:12.338580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-15 12:48:12.338645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.338953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.339020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.339315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.339379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.339679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.339759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.340030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.340096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.340385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.340449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.340696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.340776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.341021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.341086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.341339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.341403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 
00:26:32.290 [2024-11-15 12:48:12.341691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.341792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.342057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.342121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.342388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.342453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.342709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.342796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.343038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.343103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.343363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.343428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.343646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.343711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.343997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.344061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.344351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.344416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.344736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.344803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 
00:26:32.290 [2024-11-15 12:48:12.345100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.345164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.345380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.345448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.345709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.345812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.346074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.346139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.346342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.346408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.346672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.346757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.347060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.347127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.347320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.347385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.347668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.347751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.348063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.348130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 
00:26:32.290 [2024-11-15 12:48:12.348424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.348489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.348796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.348864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.349153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.349220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.349464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.349532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.349783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.349851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.350070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.350136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.350409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.350474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.350772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.350840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.351086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.351150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-15 12:48:12.351395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-15 12:48:12.351471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 
00:26:32.291 [2024-11-15 12:48:12.351777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.351844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.352107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.352172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.352410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.352475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.352680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.352764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.353000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.353065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.353364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.353429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.353716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.353810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.354012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.354077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.354276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.354343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.354596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.354660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 
00:26:32.291 [2024-11-15 12:48:12.354914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.354979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 [2024-11-15 12:48:12.355228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.355293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 [2024-11-15 12:48:12.355587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.355652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 [2024-11-15 12:48:12.355930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.355996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 [2024-11-15 12:48:12.356214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.356280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1135798 Killed "${NVMF_APP[@]}" "$@"
00:26:32.291 [2024-11-15 12:48:12.356575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.356640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 [2024-11-15 12:48:12.356873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.356939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:32.291 [2024-11-15 12:48:12.357217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.357284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:32.291 [2024-11-15 12:48:12.357549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.357615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:32.291 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:32.291 [2024-11-15 12:48:12.357937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.358004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:32.291 [2024-11-15 12:48:12.358307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.358375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 [2024-11-15 12:48:12.358624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.358688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 [2024-11-15 12:48:12.358951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.359016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 [2024-11-15 12:48:12.359274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.359350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 [2024-11-15 12:48:12.359643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.359709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 [2024-11-15 12:48:12.360033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.291 [2024-11-15 12:48:12.360099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420
00:26:32.291 qpair failed and we were unable to recover it.
00:26:32.291 [2024-11-15 12:48:12.360400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.360465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.360761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.360828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.361044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.361112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.361349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.361416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.361711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.361814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.362075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.362140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.362431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.362497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.362799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.362868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 00:26:32.291 [2024-11-15 12:48:12.363110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.291 [2024-11-15 12:48:12.363175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.291 qpair failed and we were unable to recover it. 
00:26:32.292 [2024-11-15 12:48:12.363478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.363545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1136542 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:32.292 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1136542 00:26:32.292 [2024-11-15 12:48:12.363807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.363874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1136542 ']' 00:26:32.292 [2024-11-15 12:48:12.364136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.292 [2024-11-15 12:48:12.364202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.292 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.292 [2024-11-15 12:48:12.364487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.364553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.292 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:32.292 [2024-11-15 12:48:12.364800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.364867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 
00:26:32.292 [2024-11-15 12:48:12.365159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.365224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.365442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.365511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.365810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.365876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.366090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.366159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.366402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.366468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.366771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.366849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.367072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.367138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.367391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.367456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.367712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.367794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.368011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.368078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 
00:26:32.292 [2024-11-15 12:48:12.368304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.368371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.368663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.368747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.369013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.369075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.369365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.369428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.369759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.369833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.370071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.370139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.370394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.370458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.292 qpair failed and we were unable to recover it. 00:26:32.292 [2024-11-15 12:48:12.370775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.292 [2024-11-15 12:48:12.370847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.371078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.371145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.371421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.371506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 
00:26:32.293 [2024-11-15 12:48:12.371817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.371886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.372191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.372278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.372539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.372605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.372841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.372909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.373178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.373256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.373535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.373602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.373866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.373950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.374217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.374284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.374570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.374651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.374914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.374983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 
00:26:32.293 [2024-11-15 12:48:12.375236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.375301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.375599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.375668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.376048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.376166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.376537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.376639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.376970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.377042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.377303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.377370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.377677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.377768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.378019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.378085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.378340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.378405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.378697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.378782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 
00:26:32.293 [2024-11-15 12:48:12.379039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.379132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.379434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.379523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.379874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.379966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.380284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.380377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.380746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.380836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.381145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.381248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.381607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.381701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.382021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.382087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.382380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.382447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.382746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.382814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 
00:26:32.293 [2024-11-15 12:48:12.383032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.293 [2024-11-15 12:48:12.383096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.293 qpair failed and we were unable to recover it. 00:26:32.293 [2024-11-15 12:48:12.383426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.383516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.383854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.383945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.384296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.384382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.384695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.384807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.385157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.385248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.385554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.385644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.385997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.386068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.386376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.386442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.386748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.386816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 
00:26:32.294 [2024-11-15 12:48:12.387061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.387125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.387422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.387512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.387841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.387933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.388248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.388335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.388651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.388779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.389120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.389210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.389530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.389618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.389994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.390091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.390399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.390469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.390736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.390805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 
00:26:32.294 [2024-11-15 12:48:12.391012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.391082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.391378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.391443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.391758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.391856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.392137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.392207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.392501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.392572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.392848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.392921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.393223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.393304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.393567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.393638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.393912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.393999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.394217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.394284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 
00:26:32.294 [2024-11-15 12:48:12.394528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.394595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.394909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.394980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.395188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.395255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.395466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.395531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.395815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.395887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.396146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.396226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.396494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.396565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.396814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.396882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.294 [2024-11-15 12:48:12.397140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.294 [2024-11-15 12:48:12.397208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.294 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.397486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.397553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 
00:26:32.295 [2024-11-15 12:48:12.397842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.397909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.398197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.398267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.398532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.398598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.398856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.398935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.399238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.399305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.399604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.399675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.399939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.400014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.400278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.400344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.400609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.400678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.400995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.401066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 
00:26:32.295 [2024-11-15 12:48:12.401283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.401360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.401644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.401710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.401943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.402017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.402294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.402362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.402619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.402684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.402973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.403043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.403252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.403317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.403557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.403634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.403947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.404017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.404252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.404318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 
00:26:32.295 [2024-11-15 12:48:12.404588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.404656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.404937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.405012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.405293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.405362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.405626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.405692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.405991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.406077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.406295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.406361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.406651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.406747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.407018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.407086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.407374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.407440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.407738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.407810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 
00:26:32.295 [2024-11-15 12:48:12.408060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.408127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.408386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.408466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.408754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.408823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.409068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.409138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.409418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.409487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.409699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.409803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.295 [2024-11-15 12:48:12.410067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.295 [2024-11-15 12:48:12.410136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.295 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.410354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.410419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.410628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.410694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.411019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.411087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 
00:26:32.296 [2024-11-15 12:48:12.411335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.411401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.411651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.411759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.412033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.412099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.412340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.412405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.412680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.412774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.413010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.413075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.413362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.413431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.413673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.413764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.413975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.414040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.414351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.414420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 
00:26:32.296 [2024-11-15 12:48:12.414620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.414690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.415009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.415085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.415364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.415433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.415682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.415788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.415780] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:26:32.296 [2024-11-15 12:48:12.415877] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.296 [2024-11-15 12:48:12.416054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.416119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.416336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.416401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.416702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.416792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.417052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.417116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.417320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.417399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it.
00:26:32.296 [2024-11-15 12:48:12.417749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.417820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.418080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.418162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.418441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.418508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.418822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.418890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.419095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.419165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.419384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.419450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.419774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.419848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.420127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.420197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.420414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.420481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.420779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.420850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 
00:26:32.296 [2024-11-15 12:48:12.421074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.421140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.421388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.421471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.421685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.421778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.422071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.296 [2024-11-15 12:48:12.422137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.296 qpair failed and we were unable to recover it. 00:26:32.296 [2024-11-15 12:48:12.422404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.422473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.422748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.422829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.423099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.423167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.423445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.423511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.423745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.423814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.424017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.424086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 
00:26:32.297 [2024-11-15 12:48:12.424324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.424390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.424628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.424700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.424978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.425050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.425303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.425370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.425600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.425683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.425969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.426036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.426286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.426361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.426635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.426704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.427029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.427095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.427417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.427485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 
00:26:32.297 [2024-11-15 12:48:12.427748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.427817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.428040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.428121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.428391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.428456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.428712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.428816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.429113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.429182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.429470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.429548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.429832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.429901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.430113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.430179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.430487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.430555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.430855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.430923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 
00:26:32.297 [2024-11-15 12:48:12.431174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.431243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.431494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.431564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.431830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.431914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.432147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.432215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.432480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.432546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.432843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.432913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.433142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.297 [2024-11-15 12:48:12.433207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.297 qpair failed and we were unable to recover it. 00:26:32.297 [2024-11-15 12:48:12.433495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.433563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.433821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.433893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.434135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.434200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 
00:26:32.298 [2024-11-15 12:48:12.434468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.434538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.434835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.434903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.435162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.435233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.435569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.435636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.435871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.435938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.436187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.436266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.436494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.436563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.436794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.436869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.437148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.437218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.437504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.437568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 
00:26:32.298 [2024-11-15 12:48:12.437877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.437963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.438231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.438298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.438566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.438642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.438994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.439063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.439344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.439413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.439664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.439751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.440061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.440131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.440392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.440458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.440714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.440812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.441098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.441166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 
00:26:32.298 [2024-11-15 12:48:12.441398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.441466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.441786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.441856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.442061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.442131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.442347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.442427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.442691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.442794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.443102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.443172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.443448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.443515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.443772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.443840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.444142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.444209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.444454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.444519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 
00:26:32.298 [2024-11-15 12:48:12.444806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.444890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.445124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.445191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.445486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.445557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.445787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.445856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.446121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.298 [2024-11-15 12:48:12.446188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.298 qpair failed and we were unable to recover it. 00:26:32.298 [2024-11-15 12:48:12.446402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.446484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.446753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.446822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.447079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.447148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.447415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.447497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.447784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.447854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 
00:26:32.299 [2024-11-15 12:48:12.448072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.448142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.448419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.448487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.448694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.448790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.449095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.449163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.449457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.449526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.449839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.449919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.450168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.450236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.450508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.450576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.450811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.450880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.451157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.451225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 
00:26:32.299 [2024-11-15 12:48:12.451478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.451543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.451838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.451912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.452160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.452226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.452448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.452514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.452750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.452820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.453072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.453141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.453392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.453465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.453756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.453809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.453946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.453973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.454075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.454102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 
00:26:32.299 [2024-11-15 12:48:12.454193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.454219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.454315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.454341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.454466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.454493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.454610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.454636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.454753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.454782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.454910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.454937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.455063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.455091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.455181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.455209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.455317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.455350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.455463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.455490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 
00:26:32.299 [2024-11-15 12:48:12.455578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.455606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.455706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.455742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.455858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.299 [2024-11-15 12:48:12.455885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.299 qpair failed and we were unable to recover it. 00:26:32.299 [2024-11-15 12:48:12.456008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.456035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.456120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.456147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.456263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.456291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.456401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.456427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.456512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.456543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.456633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.456660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.456785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.456812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 
00:26:32.300 [2024-11-15 12:48:12.456896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.456923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.457043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.457070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.457182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.457212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.457294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.457321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.457432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.457466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.457564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.457594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.457707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.457745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.457860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.457899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.458035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.458073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.458194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.458220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 
00:26:32.300 [2024-11-15 12:48:12.458311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.458337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.458457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.458482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.458600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.458625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.458726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.458753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.458848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.458873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.458966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.458991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.459135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.459160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.459248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.459273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.459375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.459400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.459496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.459522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 
00:26:32.300 [2024-11-15 12:48:12.459636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.459661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.459770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.459797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.459910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.459935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.460053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.460077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.460152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.460177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.460321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.460346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.460464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.460489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.460576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.460601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.460709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.460742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.460830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.460856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 
00:26:32.300 [2024-11-15 12:48:12.460948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.300 [2024-11-15 12:48:12.460973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.300 qpair failed and we were unable to recover it. 00:26:32.300 [2024-11-15 12:48:12.461068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.461093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.461205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.461235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.461345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.461370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.461479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.461503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.461593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.461618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.461732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.461771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.461896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.461924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.462025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.462050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.462137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.462163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 
00:26:32.301 [2024-11-15 12:48:12.462288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.462313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.462398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.462424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.462536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.462562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.462695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.462729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.462823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.462848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.462937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.462963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.463088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.463113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.463200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.463228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.463306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.463333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.463421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.463446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 
00:26:32.301 [2024-11-15 12:48:12.463484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1becf30 (9): Bad file descriptor 00:26:32.301 [2024-11-15 12:48:12.463645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.463680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.463815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.463843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.463955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.463989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.464078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.464104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.464222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.464252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.464387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.464414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.464531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.464557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.464673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.464701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.464826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.464852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 
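The one different message in this stretch, "Failed to flush tqpair=0x1becf30 (9): Bad file descriptor", is errno 9 (EBADF): by the time nvme_tcp_qpair_process_completions tries to flush that qpair, the socket descriptor behind it has already been closed. A tiny sketch of where that errno comes from, independent of SPDK and purely illustrative:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0)   /* any valid descriptor pair will do */
        return 1;
    close(fds[1]);        /* tear the write end down first ... */

    /* ... then try to use it, as the flush path does on an already-closed socket. */
    if (write(fds[1], "x", 1) < 0)
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno)); /* 9: Bad file descriptor */

    close(fds[0]);
    return 0;
}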
00:26:32.301 [2024-11-15 12:48:12.464979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.465010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.465106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.465132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.465209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.465235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.465362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.465400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.465521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.301 [2024-11-15 12:48:12.465548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.301 qpair failed and we were unable to recover it. 00:26:32.301 [2024-11-15 12:48:12.465639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.465664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.465783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.465811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.465927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.465953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.466072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.466098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.466184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.466209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 
00:26:32.302 [2024-11-15 12:48:12.466325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.466351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.466456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.466482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.466569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.466596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.466736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.466780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.466893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.466923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.467024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.467056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.467141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.467168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.467254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.467280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.467393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.467420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.467508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.467535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 
00:26:32.302 [2024-11-15 12:48:12.467615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.467642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.467758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.467785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.467870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.467896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.467981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.468006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.468118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.468143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.468258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.468284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.468401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.468427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.468541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.468566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.468682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.468707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.468797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.468823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 
00:26:32.302 [2024-11-15 12:48:12.468903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.468928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.469051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.469077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.469172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.469197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.469279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.469305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.469410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.469436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.469528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.469553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.469654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.469680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.469802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.469827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.469912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.469937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.470056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.470081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 
00:26:32.302 [2024-11-15 12:48:12.470167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.470197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.470312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.470337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.302 [2024-11-15 12:48:12.470429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.302 [2024-11-15 12:48:12.470459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.302 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.470587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.470616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.470736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.470764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.470857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.470883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.470981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.471029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.471126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.471153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.471300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.471326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.471417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.471443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 
00:26:32.303 [2024-11-15 12:48:12.471527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.471552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.471699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.471731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.471816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.471842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.471923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.471948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.472049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.472074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.472200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.472226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.472343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.472369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.472455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.472481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.472566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.472592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.472713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.472747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 
00:26:32.303 [2024-11-15 12:48:12.472867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.472892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.472995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.473021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.473111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.473137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.473246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.473271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.473377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.473402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.473518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.473545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.473635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.473662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.473765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.473791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.473913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.473939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.474080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.474105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 
00:26:32.303 [2024-11-15 12:48:12.474190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.474216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.474298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.474324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.474408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.474436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.474568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.474607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.474746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.474777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.474872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.474898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.474983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.475016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.475133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.475160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.475282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.475316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.303 [2024-11-15 12:48:12.475434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.475461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 
00:26:32.303 [2024-11-15 12:48:12.475571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.303 [2024-11-15 12:48:12.475605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.303 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.475703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.475736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.475826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.475851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.475936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.475962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.476053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.476078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.476193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.476218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.476302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.476327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.476440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.476465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.476549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.476577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.476688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.476713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 
00:26:32.304 [2024-11-15 12:48:12.476813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.476838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.476967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.476993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.477089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.477114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.477207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.477232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.477345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.477370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.477456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.477481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.477566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.477593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.477671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.477697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.477847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.477873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.477954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.477980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 
00:26:32.304 [2024-11-15 12:48:12.478094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.478121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.478228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.478254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.478339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.478365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.478478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.478505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.478592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.478618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.478728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.478754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.478831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.478856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.478948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.478981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.479064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.479089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.479171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.479196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 
00:26:32.304 [2024-11-15 12:48:12.479290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.479315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.479427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.479456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.479538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.479565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.479677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.479703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.479821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.479847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.479926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.479953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.480070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.480098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.480212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.480238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.304 [2024-11-15 12:48:12.480352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.304 [2024-11-15 12:48:12.480378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.304 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.480484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.480509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 
00:26:32.305 [2024-11-15 12:48:12.480587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.480612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.480701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.480734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.480829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.480855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.480971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.480996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.481086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.481112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.481220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.481245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.481356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.481382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.481458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.481484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.481630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.481655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.481747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.481774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 
00:26:32.305 [2024-11-15 12:48:12.481889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.481914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.482005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.482032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.482142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.482168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.482277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.482302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.482432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.482470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.482620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.482647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.482755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.482782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.482867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.482893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.482968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.482994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.483133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.483158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 
00:26:32.305 [2024-11-15 12:48:12.483273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.483300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.483398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.483429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.483564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.483603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.483686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.483714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.483809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.483835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.483948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.483974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.484055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.484081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.484186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.484212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.484314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.484340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.484474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.484513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 
00:26:32.305 [2024-11-15 12:48:12.484608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.484637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.484728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.484758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.484909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.484935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.485052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.485078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.485167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.485192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.485314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.485341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.305 [2024-11-15 12:48:12.485454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.305 [2024-11-15 12:48:12.485480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.305 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.485636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.485665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.485822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.485849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.485937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.485965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 
00:26:32.306 [2024-11-15 12:48:12.486086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.486114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.486233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.486260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.486348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.486374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.486459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.486485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.486625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.486651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.486783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.486810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.486954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.486980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.487056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.487094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.487211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.487237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.487318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.487344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 
00:26:32.306 [2024-11-15 12:48:12.487457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.487483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.487560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.487586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.487676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.487715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.487852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.487890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.488026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.488069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.488184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.488211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.488297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.488324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.488411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.488437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.488550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.488584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.488695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.488735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 
00:26:32.306 [2024-11-15 12:48:12.488852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.488879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.488989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.489016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.489105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.489134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.489281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.489307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.489383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.489410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.489492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.489518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.489599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.489625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.489750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.489777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.489899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.489927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 00:26:32.306 [2024-11-15 12:48:12.489999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.306 [2024-11-15 12:48:12.490031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.306 qpair failed and we were unable to recover it. 
00:26:32.306 [2024-11-15 12:48:12.490141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.490166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.490321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.490347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.490458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.490483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.490603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.490629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.490773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.490800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.490924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.490963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.491062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.491089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.491176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.491203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.491298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.491324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.491441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.491466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 
00:26:32.307 [2024-11-15 12:48:12.491577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.491603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.491737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.491766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.491885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.491913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.491990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.492026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.492120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.492147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.492263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.492290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.492397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.492423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.492518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.492547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.492641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.492679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.492814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.492842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 
00:26:32.307 [2024-11-15 12:48:12.492956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.492982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.493104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.493131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.493212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.493238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.493313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.493338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.493426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.493457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.493544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.493570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.493658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.493684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.493802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.493828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.493911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.493937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.494015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.494041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 
00:26:32.307 [2024-11-15 12:48:12.494152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.494178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.494301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.494341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.494390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:32.307 [2024-11-15 12:48:12.494467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.494496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.494610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.494638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.494758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.494785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.494878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.494903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.495026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.495055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.495174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.307 [2024-11-15 12:48:12.495199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.307 qpair failed and we were unable to recover it. 00:26:32.307 [2024-11-15 12:48:12.495292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.495319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.495467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.495493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 
00:26:32.308 [2024-11-15 12:48:12.495577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.495610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.495728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.495754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.495827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.495853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.495993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.496027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.496110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.496136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.496256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.496282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.496362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.496392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.496483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.496509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.496619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.496645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.496736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.496762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 
00:26:32.308 [2024-11-15 12:48:12.496851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.496876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.496966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.496993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.497084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.497111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.497251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.497277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.497373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.497399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.497484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.497509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.497597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.497624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.497709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.497745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.497869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.497896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.498021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.498048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 
00:26:32.308 [2024-11-15 12:48:12.498138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.498164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.498256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.498281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.498363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.498390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.498470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.498496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.498581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.498614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.498699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.498738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.498849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.498876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.499017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.499042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.499143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.499170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.499282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.499309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 
00:26:32.308 [2024-11-15 12:48:12.499448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.499474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.499563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.499594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.499677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.499702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.499835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.499861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.499978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.500003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.500145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.308 [2024-11-15 12:48:12.500170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.308 qpair failed and we were unable to recover it. 00:26:32.308 [2024-11-15 12:48:12.500260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.500285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.500394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.500419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.500574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.500600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.500682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.500707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 
00:26:32.309 [2024-11-15 12:48:12.500822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.500847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.500926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.500954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.501091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.501117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.501215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.501249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.501326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.501351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.501444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.501469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.501600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.501641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.501740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.501768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.501879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.501918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.502046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.502073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 
00:26:32.309 [2024-11-15 12:48:12.502171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.502197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.502292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.502324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.502442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.502467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.502548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.502587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.502714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.502745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.502861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.502887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.502972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.502997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.503130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.503156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.503234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.503260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.503355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.503381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 
00:26:32.309 [2024-11-15 12:48:12.503481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.503525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.503682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.503716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.503836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.503863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.504005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.504032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.504130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.504157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.504288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.504316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.504400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.504426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.504540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.504567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.504687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.504726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.504841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.504867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 
00:26:32.309 [2024-11-15 12:48:12.504988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.505023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.505113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.505138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.505254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.505280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.309 [2024-11-15 12:48:12.505391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.309 [2024-11-15 12:48:12.505417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.309 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.505504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.505538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.505652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.505678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.505828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.505855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.505971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.505996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.506103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.506130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.506231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.506257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 
00:26:32.310 [2024-11-15 12:48:12.506374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.506400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.506546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.506572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.506668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.506693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.506798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.506825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.506942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.506967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.507084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.507110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.507253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.507278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.507373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.507399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.507531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.507556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.507671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.507696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 
00:26:32.310 [2024-11-15 12:48:12.507814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.507853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.507952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.507986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.508070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.508096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.508298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.508323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.508449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.508474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.508556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.508581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.508692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.508734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.508875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.508900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.508988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.509013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.509093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.509119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 
00:26:32.310 [2024-11-15 12:48:12.509235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.509260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.509360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.509386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.509475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.509503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.509661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.509712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.509848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.509875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.509966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.509993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.510191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.310 [2024-11-15 12:48:12.510217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.310 qpair failed and we were unable to recover it. 00:26:32.310 [2024-11-15 12:48:12.510303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.510328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.510410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.510436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.510562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.510587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 
00:26:32.311 [2024-11-15 12:48:12.510710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.510742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.510826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.510852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.510971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.510997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.511116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.511147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.511265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.511290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.511429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.511455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.511542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.511568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.511670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.511697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.511849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.511893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.512018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.512046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 
00:26:32.311 [2024-11-15 12:48:12.512167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.512192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.512277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.512306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.512427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.512452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.512542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.512568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.512681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.512714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.512821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.512846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.512956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.512981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.513084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.513109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.513195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.513221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.513336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.513365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 
00:26:32.311 [2024-11-15 12:48:12.513450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.513477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.513593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.513620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.513742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.513769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.513850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.513875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.513963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.513989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.514072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.514098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.514228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.514266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.514387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.514413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.514527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.514553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.514693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.514737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 
00:26:32.311 [2024-11-15 12:48:12.514852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.514877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.514993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.515020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.515134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.515159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.515283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.515309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.311 [2024-11-15 12:48:12.515408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.311 [2024-11-15 12:48:12.515436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.311 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.515528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.515555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.515670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.515695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.515815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.515841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.515929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.515955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.516106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.516132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 
00:26:32.312 [2024-11-15 12:48:12.516244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.516270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.516351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.516377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.516503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.516529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.516647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.516672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.516775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.516802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.516939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.516965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.517079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.517105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.517242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.517268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.517388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.517419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.517537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.517563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 
00:26:32.312 [2024-11-15 12:48:12.517678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.517704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.517804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.517830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.517948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.517973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.518095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.518121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.518216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.518242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.518315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.518340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.518428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.518453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.518563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.518589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.518675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.518710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.518836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.518862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 
00:26:32.312 [2024-11-15 12:48:12.518978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.519003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.519148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.519174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.519298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.519325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.519411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.519436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.519551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.519579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.519665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.519692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.519793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.519819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.519937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.519963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.520055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.520081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.520191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.520218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 
00:26:32.312 [2024-11-15 12:48:12.520356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.520381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.520465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.520491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.520608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.312 [2024-11-15 12:48:12.520634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.312 qpair failed and we were unable to recover it. 00:26:32.312 [2024-11-15 12:48:12.520726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.520754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.520848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.520887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.521042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.521075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.521225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.521251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.521335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.521360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.521451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.521479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.521592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.521619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 
00:26:32.313 [2024-11-15 12:48:12.521744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.521771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.521884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.521911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.521991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.522016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.522100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.522126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.522247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.522272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.522382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.522411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.522525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.522551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.522631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.522666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.522795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.522821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.522937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.522962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 
00:26:32.313 [2024-11-15 12:48:12.523079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.523104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.523188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.523214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.523332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.523357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.523475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.523500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.523584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.523609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.523690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.523715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.523837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.523865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.523981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.524007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.524133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.524159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.524242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.524267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 
00:26:32.313 [2024-11-15 12:48:12.524383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.524409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.524518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.524543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.524621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.524646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.524765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.524791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.524935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.524961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.525100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.525126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.525241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.525267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.525390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.525417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.525505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.525530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.313 [2024-11-15 12:48:12.525644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.525669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 
00:26:32.313 [2024-11-15 12:48:12.525799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.313 [2024-11-15 12:48:12.525825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.313 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-15 12:48:12.525936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-15 12:48:12.525963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-15 12:48:12.526093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-15 12:48:12.526119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-15 12:48:12.526209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-15 12:48:12.526236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-15 12:48:12.526330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-15 12:48:12.526355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-15 12:48:12.526451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-15 12:48:12.526482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-15 12:48:12.526593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-15 12:48:12.526619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-15 12:48:12.526733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-15 12:48:12.526761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-15 12:48:12.526846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-15 12:48:12.526872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-15 12:48:12.526962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-15 12:48:12.526989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 
00:26:32.314 [2024-11-15 12:48:12.527101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.314 [2024-11-15 12:48:12.527126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420
00:26:32.314 qpair failed and we were unable to recover it.
00:26:32.314 [2024-11-15 12:48:12.527221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.314 [2024-11-15 12:48:12.527248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420
00:26:32.314 qpair failed and we were unable to recover it.
[log condensed: the same three-line failure record repeats for every remaining connection attempt between 2024-11-15 12:48:12.527 and 12:48:12.555: connect() failed, errno = 111 in posix.c:1054:posix_sock_create, then a sock connection error in nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock on tqpair 0x7fea00000b90, 0x1bdefa0, or 0x7fea0c000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." The console clock advances from 00:26:32.314 to 00:26:32.320 over this span.]
00:26:32.320 [2024-11-15 12:48:12.555589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.555615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.555734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.555761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.555844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.555871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.555988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.556014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.556099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.556133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.556226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.556251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.556327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.556353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.556434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.556460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.556541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.556566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.556677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.556702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 
00:26:32.320 [2024-11-15 12:48:12.556798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.556824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.556900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.556925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.557007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.557032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.557109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.557147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.557226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.557254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.557338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.557363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.557447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.557473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.557555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.557580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.557660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.557685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.557782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.557807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 
00:26:32.320 [2024-11-15 12:48:12.557919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.557944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.557933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.320 [2024-11-15 12:48:12.557966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.320 [2024-11-15 12:48:12.557981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.320 [2024-11-15 12:48:12.557993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.320 [2024-11-15 12:48:12.558004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.320 [2024-11-15 12:48:12.558089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.558113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.558226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-15 12:48:12.558251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-15 12:48:12.558369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.558397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.558487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.558523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.558641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.558667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.558790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.558816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.558899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.558925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it.
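The app_setup_trace notices above describe how to pull trace data out of this run. A minimal sketch of that capture follows, assuming the spdk_trace tool built from this tree is on PATH on the test node and that the nvmf target is the only SPDK application running; the '-s nvmf -i 0' arguments and the /dev/shm/nvmf_trace.0 path come from the notices themselves, while the snapshot file name and destination directory are only illustrative.

# Capture a live snapshot of the nvmf target's tracepoints (shared-memory instance 0),
# as suggested by app_setup_trace; redirecting keeps the text with the job artifacts.
spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt

# Or preserve the raw shared-memory trace file for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0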
00:26:32.321 [2024-11-15 12:48:12.559049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.559074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.559185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.559210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.559325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.559350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.559483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.559509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.559586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.559612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.559702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.559733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.559667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:32.321 [2024-11-15 12:48:12.559748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:32.321 [2024-11-15 12:48:12.559854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.559800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:32.321 [2024-11-15 12:48:12.559804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:32.321 [2024-11-15 12:48:12.559881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.559970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.559994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.560093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.560118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 
00:26:32.321 [2024-11-15 12:48:12.560216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.560242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.560325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.560350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.560446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.560471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.560581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.560607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.560691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.560724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.560823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.560848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.560928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.560955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.561046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.561071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.561207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.561248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.561340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.561367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 
00:26:32.321 [2024-11-15 12:48:12.561450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.561484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.561605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.561630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.561735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.561761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.561846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.561873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.561961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.561989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.562085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.562116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.562204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.562236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.562329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.562355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.562467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.562494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-15 12:48:12.562608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-15 12:48:12.562635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 
00:26:32.321 [2024-11-15 12:48:12.562748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.562775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.562866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.562892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.562972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.562998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.563081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.563106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.563214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.563249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.563334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.563359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.563449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.563475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.563603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.563643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.563746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.563774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.563864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.563892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 
00:26:32.322 [2024-11-15 12:48:12.563985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.564012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.564089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.564115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.564196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.564228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.564304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.564329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.564406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.564432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.564546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.564586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.564679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.564707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.564838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.564864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.564950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.564976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.565064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.565090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 
00:26:32.322 [2024-11-15 12:48:12.565297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.565326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.565412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.565437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.565542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.565568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.565651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.565677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.565807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.565833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.565916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.565942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.566029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.566054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.566205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.566230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.566310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.566336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.566428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.566456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 
00:26:32.322 [2024-11-15 12:48:12.566547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.566573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.566666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.566696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.566790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-15 12:48:12.566817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-15 12:48:12.566905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.566939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.567018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.567045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.567121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.567158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.567257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.567282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.567401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.567427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.567547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.567574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.567658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.567684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 
00:26:32.323 [2024-11-15 12:48:12.567787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.567815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.567933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.567960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.568066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.568092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.568172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.568197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.568299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.568325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.568404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.568430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.568506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.568531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.568637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.568677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.568784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.568814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.568897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.568924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 
00:26:32.323 [2024-11-15 12:48:12.569002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.569038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.569133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.569160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.569285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.569312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.569395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.569422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.569525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.569564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.569768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.569797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.569880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.569905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.569980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.570006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.570125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.570151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.570264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.570290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 
00:26:32.323 [2024-11-15 12:48:12.570378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.570408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.570488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.570514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.570599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.570625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.570706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.570748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.570838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.570863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-15 12:48:12.570939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-15 12:48:12.570964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.571056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.571084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.571178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.571205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.571310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.571349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.571475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.571502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 
00:26:32.324 [2024-11-15 12:48:12.571588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.571614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.571700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.571733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.571826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.571855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.571939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.571973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.572061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.572096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.572213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.572239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.572331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.572358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.572496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.572523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.572607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.572632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.572728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.572755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 
00:26:32.324 [2024-11-15 12:48:12.572833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.572860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.572940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.572967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.573060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.573093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.573191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.573219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.573307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.573336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.573449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.573475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.573566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.573600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.573760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.573787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.573876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.573902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.573982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.574008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 
00:26:32.324 [2024-11-15 12:48:12.574097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.574124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.574204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.574230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.574316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.574341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.574441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.574468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.574552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.574577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.574731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.324 [2024-11-15 12:48:12.574758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.324 qpair failed and we were unable to recover it. 00:26:32.324 [2024-11-15 12:48:12.574844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.574871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.574963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.575002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.575112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.575138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.575260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.575288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 
00:26:32.325 [2024-11-15 12:48:12.575486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.575514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.575593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.575621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.575703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.575741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.575855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.575881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.575962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.575988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.576073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.576101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.576189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.576215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.576307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.576333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.576412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.576441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.576553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.576581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 
00:26:32.325 [2024-11-15 12:48:12.576662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.576690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.576796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.576823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.576906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.576932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.577023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.577055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.577146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.577172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.577255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.577281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.577366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.577394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.577485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.577512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.325 [2024-11-15 12:48:12.577585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.325 [2024-11-15 12:48:12.577611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.325 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.577751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.577778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 
00:26:32.326 [2024-11-15 12:48:12.577893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.577918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.577996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.578021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.578109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.578136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.578256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.578285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.578369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.578395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.578510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.578538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.578618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.578643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.578753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.578780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.578864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.578889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.578966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.578992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 
00:26:32.326 [2024-11-15 12:48:12.579088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.579114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.579199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.579224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.579308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.579335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.579421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.579448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.579535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.579563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.579644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.579670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.579786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.579825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.579922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.579949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.580090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.580116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.580199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.580224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 
00:26:32.326 [2024-11-15 12:48:12.580310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.580337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.580422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.580450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.580563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.580591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.580670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.580697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.580825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.580853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.580945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.580971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.581067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.581094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.581169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.581195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.581287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.581314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.581411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.581449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 
00:26:32.326 [2024-11-15 12:48:12.581539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.581566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.581648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.581673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.581802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.581828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.581912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.581938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.582048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.582076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-15 12:48:12.582166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-15 12:48:12.582192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.582283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.582312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.582389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.582415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.582529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.582554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.582634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.582660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 
00:26:32.327 [2024-11-15 12:48:12.582750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.582779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.582874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.582902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.582990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.583024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.583105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.583132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.583271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.583298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.583381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.583408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.583523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.583550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.583643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.583671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.583776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.583802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.583892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.583919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 
00:26:32.327 [2024-11-15 12:48:12.583997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.584034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.584121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.584147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.584233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.584262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.584354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.584381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.584501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.584527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.584614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.584640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.584729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.584755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.584832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.584858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.584939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.584966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.585053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.585085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 
00:26:32.327 [2024-11-15 12:48:12.585177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.585214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.585335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.585363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.585454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.585482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.585590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.585616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.585697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.585741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.585831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.585857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.585939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.585966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-15 12:48:12.586058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-15 12:48:12.586085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.586204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.586233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.586362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.586401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 
00:26:32.328 [2024-11-15 12:48:12.586492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.586521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.586605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.586632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.586715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.586749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.586834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.586861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.586947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.586973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.587106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.587133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.587209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.587235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.587351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.587378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.587459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.587485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.587569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.587597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 
00:26:32.328 [2024-11-15 12:48:12.587681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.587715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.587806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.587832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.587915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.587941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.588028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.588056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.588142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.588170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.588286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.588314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.588397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.588422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.588501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.588531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.588642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.588668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.588780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.588808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 
00:26:32.328 [2024-11-15 12:48:12.588888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.588915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.589000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.589033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.589144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.589170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.589251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.589276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.589357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.589384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.589467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.589493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.589584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.589609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.589695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.589729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.589806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.589832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.589915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.589941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 
00:26:32.328 [2024-11-15 12:48:12.590025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.590057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.590145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.590171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.590261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.590289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.590407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.590435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-15 12:48:12.590521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-15 12:48:12.590547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.590629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.590654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.590752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.590779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.590870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.590896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.590975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.591000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.591100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.591130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 
00:26:32.329 [2024-11-15 12:48:12.591242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.591269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.591341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.591366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.591448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.591473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.591550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.591575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.591696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.591730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.591823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.591848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.591931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.591956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.592058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.592083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.592166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.592192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.592300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.592325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 
00:26:32.329 [2024-11-15 12:48:12.592406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.592433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.592528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.592568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.592704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.592743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.592824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.592850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.592937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.592963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.593051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.593078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.593155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.593181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.593296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.593328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.593413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.593439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.593524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.593550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 
00:26:32.329 [2024-11-15 12:48:12.593640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.593667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.593764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.593792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.593902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.593929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.594026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.594054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.594169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.594195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.594275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.594302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.594384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.594411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.594494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.594519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.594630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.594656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 00:26:32.329 [2024-11-15 12:48:12.594764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.594791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.329 qpair failed and we were unable to recover it. 
00:26:32.329 [2024-11-15 12:48:12.594882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.329 [2024-11-15 12:48:12.594921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.595020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.595048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.595127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.595153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.595248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.595274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.595361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.595390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.595478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.595506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.595626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.595652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.595782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.595807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.595891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.595916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.595997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.596030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 
00:26:32.330 [2024-11-15 12:48:12.596146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.596170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.596246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.596272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.596386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.596412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.596497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.596525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.596646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.596695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.596802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.596830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.596921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.596948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.597071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.597098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.597181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.597207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.597314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.597340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 
00:26:32.330 [2024-11-15 12:48:12.597452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.597479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.597567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.597597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.597693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.597725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.597815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.597841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.597927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.597953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.598064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.598090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.598166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.598191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.598270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.598296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.598387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.598415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.598533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.598561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 
00:26:32.330 [2024-11-15 12:48:12.598636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.598662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.598748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.598775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.598884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.598910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.599026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.330 [2024-11-15 12:48:12.599054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.330 qpair failed and we were unable to recover it. 00:26:32.330 [2024-11-15 12:48:12.599158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.599186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.599299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.599325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.599410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.599436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.599544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.599570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.599660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.599687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.599794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.599821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 
00:26:32.331 [2024-11-15 12:48:12.599906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.599933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.600017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.600048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.600136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.600162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.600276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.600303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.600392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.600418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.600497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.600523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.600610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.600636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.600724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.600751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.600830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.600856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.600940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.600966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 
00:26:32.331 [2024-11-15 12:48:12.601058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.601085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.601172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.601198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.601307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.601333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.601439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.601465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.601543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.601574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.601661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.601689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.601802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.601831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.601919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.601949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.602074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.602099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.602177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.602203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 
00:26:32.331 [2024-11-15 12:48:12.602290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.602317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.602438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.602464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.602541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.602567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.602675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.602711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.602806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.602833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.602912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.602938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-15 12:48:12.603029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-15 12:48:12.603054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.603133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.603159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.603243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.603270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.603352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.603379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 
00:26:32.332 [2024-11-15 12:48:12.603466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.603491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.603578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.603606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.603688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.603726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.603806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.603832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.604024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.604050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.604167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.604195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.604290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.604329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.604410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.604439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.604551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.604576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.604669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.604695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 
00:26:32.332 [2024-11-15 12:48:12.604784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.604811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.604897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.604926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.605024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.605050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.605135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.605161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.605241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.605267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.605373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.605398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.605483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.605511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.605598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.605624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.605704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.605741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.605827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.605853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 
00:26:32.332 [2024-11-15 12:48:12.605933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.605959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.606049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.606075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.606172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.606198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.606276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.606304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.606439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.606482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.606579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.606618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.606705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.606743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.606819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.606845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.606938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.606965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-15 12:48:12.607044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.607070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 
00:26:32.332 [2024-11-15 12:48:12.607160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-15 12:48:12.607188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.607308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.607337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.607426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.607457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.607553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.607580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.607671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.607697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.607792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.607820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.607909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.607935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.608034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.608060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.608140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.608167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.608262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.608288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 
00:26:32.615 [2024-11-15 12:48:12.608372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.608400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.608491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.608517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.608611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.608639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.608736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.608764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.608848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.608875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.608957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.608983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.609069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.609096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.609179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.609205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.609320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.609348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.609433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.609459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 
00:26:32.615 [2024-11-15 12:48:12.609546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.609571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.609680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.609736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.609820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.609846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.609928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.609954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.610041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.610066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.610155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.610180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.610264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.610290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.610374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.610400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.610486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.610514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.610601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.610631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 
00:26:32.615 [2024-11-15 12:48:12.610728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.610757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.610839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.610865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.610975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.611001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.611089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.611114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.611201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.611229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.615 [2024-11-15 12:48:12.611327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.615 [2024-11-15 12:48:12.611353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.615 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.611428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.611454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.611534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.611562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.611647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.611676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.611775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.611803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 
00:26:32.616 [2024-11-15 12:48:12.611883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.611910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.612001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.612036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.612119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.612144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.612226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.612251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.612330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.612356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.612434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.612459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.612547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.612574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.612687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.612715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.612819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.612846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.612925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.612952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 
00:26:32.616 [2024-11-15 12:48:12.613038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.613071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.613183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.613209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.613288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.613314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.613401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.613427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.613510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.613538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.613620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.613647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.613743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.613770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.613855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.613881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.613995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.614024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.614101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.614128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 
00:26:32.616 [2024-11-15 12:48:12.614215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.614241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.614358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.614390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.614470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.614496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.614575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.614601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.614684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.614732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.614830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.614856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.614933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.614960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.615084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.615110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.615187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.615213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.615294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.615318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 
00:26:32.616 [2024-11-15 12:48:12.615431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.615458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.615539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.615566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.615664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.615702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.616 qpair failed and we were unable to recover it. 00:26:32.616 [2024-11-15 12:48:12.615824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.616 [2024-11-15 12:48:12.615851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.615933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.615960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.616057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.616084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.616167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.616195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.616275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.616300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.616377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.616402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.616478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.616504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 
00:26:32.617 [2024-11-15 12:48:12.616577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.616603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.616681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.616707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.616803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.616829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.616912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.616938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.617028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.617054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.617171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.617198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.617280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.617309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.617391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.617419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.617504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.617532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.617641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.617667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 
00:26:32.617 [2024-11-15 12:48:12.617769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.617796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.617880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.617906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.617992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.618028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.618109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.618135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.618217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.618244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.618327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.618352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.618432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.618457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.618572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.618597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.618670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.618697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.618796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.618824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 
00:26:32.617 [2024-11-15 12:48:12.618905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.618932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.619049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.619084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.619280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.619306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.619389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.619416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.619552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.619579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.619651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.619677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.619789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.619828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.619953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.619981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.620068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.620093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.620178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.620206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 
00:26:32.617 [2024-11-15 12:48:12.620321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.620347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.617 [2024-11-15 12:48:12.620440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.617 [2024-11-15 12:48:12.620465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.617 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.620570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.620596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.620683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.620711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.620800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.620825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.620919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.620946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.621084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.621110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.621196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.621221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.621305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.621331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.621408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.621434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 
00:26:32.618 [2024-11-15 12:48:12.621542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.621567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.621768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.621798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.621917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.621944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.622039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.622073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.622175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.622202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.622276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.622302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.622378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.622404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.622489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.622514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.622595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.622625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.622731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.622770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 
00:26:32.618 [2024-11-15 12:48:12.622869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.622897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.622977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.623003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.623085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.623111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.623200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.623227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.623316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.623347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.623432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.623459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.623565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.623591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.623676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.623701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.623795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.623820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.623904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.623930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 
00:26:32.618 [2024-11-15 12:48:12.624082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.624110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.624199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.624233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.624340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.624367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.624447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.624473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.624563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.624589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.624665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.624696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.624800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.624826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.624913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.624938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.625026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.625052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 00:26:32.618 [2024-11-15 12:48:12.625141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.618 [2024-11-15 12:48:12.625166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.618 qpair failed and we were unable to recover it. 
00:26:32.619 [2024-11-15 12:48:12.625254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.625282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.625399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.625426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.625508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.625537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.625625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.625651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.625746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.625773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.625870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.625908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.625992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.626019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.626097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.626124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.626203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.626230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.626312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.626339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 
00:26:32.619 [2024-11-15 12:48:12.626431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.626464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.626564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.626592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.626669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.626696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.626848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.626874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.626953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.626979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.627059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.627085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.627166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.627192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.627277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.627306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.627434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.627473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.627565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.627593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 
00:26:32.619 [2024-11-15 12:48:12.627684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.627711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.627803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.627830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.627937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.627963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.628045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.628071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.628153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.628179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.628263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.619 [2024-11-15 12:48:12.628301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.619 qpair failed and we were unable to recover it. 00:26:32.619 [2024-11-15 12:48:12.628403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.628430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.628521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.628550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.628633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.628661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.628779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.628806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 
00:26:32.620 [2024-11-15 12:48:12.628893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.628930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.629026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.629054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.629258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.629289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.629380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.629407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.629499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.629526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.629613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.629639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.629781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.629810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.629896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.629923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.630009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.630035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.630126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.630151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 
00:26:32.620 [2024-11-15 12:48:12.630293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.630318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.630434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.630460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.630538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.630563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.630680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.630709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.630806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.630833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.630920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.630947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.631041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.631068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.631176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.631203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.631294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.631324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.631409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.631436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 
00:26:32.620 [2024-11-15 12:48:12.631531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.631560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.631649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.631676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.631800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.631829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.631911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.631937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.632019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.632051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.632139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.632166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.632252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.632278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.632391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.632418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.632508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.632539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.632639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.632666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 
00:26:32.620 [2024-11-15 12:48:12.632756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.632784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.632869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.632894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.620 [2024-11-15 12:48:12.632968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.620 [2024-11-15 12:48:12.632993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.620 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.633076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.633104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.633189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.633215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.633292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.633317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.633397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.633423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.633539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.633577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.633703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.633741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.633827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.633855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 
00:26:32.621 [2024-11-15 12:48:12.633936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.633963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.634052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.634077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.634194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.634228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.634329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.634355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.634471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.634497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.634581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.634606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.634683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.634708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.634797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.634823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.634908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.634933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.635015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.635040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 
00:26:32.621 [2024-11-15 12:48:12.635154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.635179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.635258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.635282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.635363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.635388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.635468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.635494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.635607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.635634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.635726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.635756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.635855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.635893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.636098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.636135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.636264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.636301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.636421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.636458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 
00:26:32.621 [2024-11-15 12:48:12.636550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.636577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.636672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.636697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.636787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.636813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.636896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.636921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.637029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.637055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.637137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.637162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.637238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.637263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.637356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.637381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.637487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.637513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.637603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.637639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 
00:26:32.621 [2024-11-15 12:48:12.637749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.621 [2024-11-15 12:48:12.637787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.621 qpair failed and we were unable to recover it. 00:26:32.621 [2024-11-15 12:48:12.637887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.637916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.638008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.638034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.638121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.638147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.638257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.638283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.638396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.638423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.638535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.638560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.638635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.638661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.638746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.638772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.638857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.638882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-11-15 12:48:12.638975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.639000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.639084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.639109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.639211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.639235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.639331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.639363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.639464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.639491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.639582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.639615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.639700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.639739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.639825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.639852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.639944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.639976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.640063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.640089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-11-15 12:48:12.640175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.640204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.640295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.640321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.640437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.640465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.640580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.640605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.640700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.640739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.640833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.640866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.640972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.641006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.641094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.641121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.641203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.641231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.641310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.641336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 
00:26:32.622 [2024-11-15 12:48:12.641443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.641470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.641556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.641585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.641696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.641733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.641853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.641879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.641967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.641995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.642085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.642111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.642190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.642217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.642329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.642355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.642443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.622 [2024-11-15 12:48:12.642468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.622 qpair failed and we were unable to recover it. 00:26:32.622 [2024-11-15 12:48:12.642579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.642605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 
00:26:32.623 [2024-11-15 12:48:12.642690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.642715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.642835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.642860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.642949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.642976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.643062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.643087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.643166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.643191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.643277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.643305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.643417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.643447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.643656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.643688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.643794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.643822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.643939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.643974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 
00:26:32.623 [2024-11-15 12:48:12.644110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.644145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.644266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.644293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.644373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.644400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.644511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.644539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.644623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.644650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.644744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.644772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.644854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.644881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.644959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.644985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.645066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.645099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.645211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.645237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 
00:26:32.623 [2024-11-15 12:48:12.645326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.645356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.645445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.645472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.645588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.645616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.645756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.645783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.645870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.645896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.645980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.646006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.646087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.646118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.646199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.646225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.646307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.646333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.646444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.646473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 
00:26:32.623 [2024-11-15 12:48:12.646557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.646584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.646700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.646736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.646820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.646846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.646924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.646949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.647032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.647056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.623 [2024-11-15 12:48:12.647142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.623 [2024-11-15 12:48:12.647166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.623 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.647244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.647268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.647378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.647403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.647489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.647514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.647617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.647642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 
00:26:32.624 [2024-11-15 12:48:12.647739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.647775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.647898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.647926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.648016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.648042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.648152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.648178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.648265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.648291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.648365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.648390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.648476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.648503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.648596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.648635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.648734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.648767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.648879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.648907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 
00:26:32.624 [2024-11-15 12:48:12.648988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.649014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.649091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.649117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.649235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.649262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.649341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.649372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.649463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.649490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.649601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.649627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.649708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.649741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.649819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.649844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.649922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.649948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.650058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.650083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 
00:26:32.624 [2024-11-15 12:48:12.650196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.650222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.650310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.650341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.650425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.650453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.650563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.624 [2024-11-15 12:48:12.650589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.624 qpair failed and we were unable to recover it. 00:26:32.624 [2024-11-15 12:48:12.650677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.650702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.650800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.650827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.650913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.650938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.651028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.651055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.651145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.651171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.651249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.651275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 
00:26:32.625 [2024-11-15 12:48:12.651380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.651405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.651542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.651568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.651665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.651697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.651787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.651813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.651907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.651933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.652006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.652031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.652116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.652142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.652250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.652275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.652380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.652405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.652495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.652520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 
00:26:32.625 [2024-11-15 12:48:12.652628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.652668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.652810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.652837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.652921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.652946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.653032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.653057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.653173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.653198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.653271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.653296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.653381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.653406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.653519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.653549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.653640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.653670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.653785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.653813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 
00:26:32.625 [2024-11-15 12:48:12.653901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.653927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.654010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.654036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.654163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.654187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.654305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.654332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.654423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.654448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.654530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.654555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.654643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.654668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.654763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.654790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.654902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.654928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.655039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.655064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 
00:26:32.625 [2024-11-15 12:48:12.655146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.655171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.655259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.655285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.625 [2024-11-15 12:48:12.655414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.625 [2024-11-15 12:48:12.655440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.625 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.655526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.655551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.655637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.655662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.655755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.655781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.655860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.655885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.655975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.656001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.656116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.656142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.656229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.656254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 
00:26:32.626 [2024-11-15 12:48:12.656363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.656389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.656499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.656525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.656615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.656640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.656729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.656755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.656837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.656863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.656947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.656972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.657080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.657105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.657188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.657215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.657343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.657369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.657451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.657477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 
00:26:32.626 [2024-11-15 12:48:12.657594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.657624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.657735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.657761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.657850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.657876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.657958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.657985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.658059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.658085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.658166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.658191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.658268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.658294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.658380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.658406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.658486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.658512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.658590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.658616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 
00:26:32.626 [2024-11-15 12:48:12.658727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.658768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.658872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.658910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.658989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.659017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.659134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.659160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.659252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.659278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.659365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.659392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.659475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.659500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.659576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.659602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.659675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.659701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.626 [2024-11-15 12:48:12.659824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.659850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 
00:26:32.626 [2024-11-15 12:48:12.659937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.626 [2024-11-15 12:48:12.659964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.626 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.660050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.660076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.660160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.660186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.660264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.660290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.660381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.660408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.660507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.660546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.660631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.660660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.660755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.660789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.660874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.660901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.660988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.661018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 
00:26:32.627 [2024-11-15 12:48:12.661100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.661125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.661208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.661235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.661320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.661346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.661433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.661460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.661546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.661572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.661656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.661682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.661798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.661824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.661906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.661932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.662012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.662038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.662150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.662175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 
00:26:32.627 [2024-11-15 12:48:12.662261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.662287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.662373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.662399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.662473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.662500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.662628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.662653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.662766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.662793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.662883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.662909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.662985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.663010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.663095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.663120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.663201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.663227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.663334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.663360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 
00:26:32.627 [2024-11-15 12:48:12.663447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.663486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.663610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.663637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.663753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.663780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.663886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.663912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.663996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.664022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.664108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.664133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.664223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.664249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.664334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.664361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.627 [2024-11-15 12:48:12.664442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.627 [2024-11-15 12:48:12.664467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.627 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.664553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.664580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 
00:26:32.628 [2024-11-15 12:48:12.664708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.664741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.664835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.664860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.664942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.664967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.665048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.665073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.665151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.665177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.665258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.665283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.665365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.665390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.665472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.665498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.665583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.665610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.665692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.665725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 
00:26:32.628 [2024-11-15 12:48:12.665819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.665845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.665928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.665954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.666062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.666088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.666193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.666218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.666305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.666330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.666414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.666439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.666549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.666575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.666688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.666715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.666805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.666831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.666917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.666943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 
00:26:32.628 [2024-11-15 12:48:12.667026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.667051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.667137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.667163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.667243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.667268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.667339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.667363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.667473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.667498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.667598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.667636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.667738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.667766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.667971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.668004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.668118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.668145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.668242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.668278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 
00:26:32.628 [2024-11-15 12:48:12.668376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.668403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.668511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.668538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.668634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.668664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.668759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.668786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.668879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.668915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.669011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.669037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.628 [2024-11-15 12:48:12.669114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.628 [2024-11-15 12:48:12.669140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.628 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.669224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.669249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.669337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.669367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.669485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.669515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 
00:26:32.629 [2024-11-15 12:48:12.669632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.669659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.669739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.669767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.669897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.669923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.670007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.670033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.670120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.670145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.670221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.670247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.670334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.670359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.670475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.670500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.670621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.670646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.670729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.670755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 
00:26:32.629 [2024-11-15 12:48:12.670836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.670861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.670940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.670965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.671050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.671075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.671157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.671182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.671261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.671286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.671369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.671394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.671475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.671500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.671593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.671620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.671708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.671746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.671839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.671877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 
00:26:32.629 [2024-11-15 12:48:12.671971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.671998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.672078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.672110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.672197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.672223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.672319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.672348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.672463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.672490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.672576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.672603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.672686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.672712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.672814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.672841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.672917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.672942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 00:26:32.629 [2024-11-15 12:48:12.673055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.629 [2024-11-15 12:48:12.673082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.629 qpair failed and we were unable to recover it. 
00:26:32.629 [2024-11-15 12:48:12.673211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.673236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.673328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.673355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.673445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.673473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.673556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.673582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.673692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.673725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.673813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.673839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.673923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.673949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.674041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.674067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.674146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.674172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.674268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.674306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 
00:26:32.630 [2024-11-15 12:48:12.674399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.674427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.674516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.674542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.674659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.674685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.674778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.674804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.674895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.674920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.675005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.675032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.675115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.675140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.675216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.675243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.675332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.675361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.675440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.675466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 
00:26:32.630 [2024-11-15 12:48:12.675555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.675581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.675656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.675683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.675786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.675813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.675898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.675924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.676001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.676027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.676107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.676132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.676215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.676243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.676333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.676359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.676441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.676466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.676554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.676579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 
00:26:32.630 [2024-11-15 12:48:12.676653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.676681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.676769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.676796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.676888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.676914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.677006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.677032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.677114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.677139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.677224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.677250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.677332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.677358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.677437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.630 [2024-11-15 12:48:12.677463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.630 qpair failed and we were unable to recover it. 00:26:32.630 [2024-11-15 12:48:12.677551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.677580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.677671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.677704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 
00:26:32.631 [2024-11-15 12:48:12.677827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.677873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.677976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.678003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.678091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.678119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.678233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.678258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.678343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.678369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.678473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.678506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.678599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.678624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.678708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.678749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.678827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.678853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.678937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.678965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 
00:26:32.631 [2024-11-15 12:48:12.679055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.679081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.679160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.679185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.679270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.679296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.679373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.679404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.679495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.679523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.679615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.679654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.679752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.679780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.679862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.679888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.679970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.680000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.680083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.680108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 
00:26:32.631 [2024-11-15 12:48:12.680197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.680224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.680311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.680339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.680421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.680446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.680528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.680557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.680650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.680676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.680772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.680799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.680879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.680905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.680988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.681013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.681092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.681118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.681198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.681223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 
00:26:32.631 [2024-11-15 12:48:12.681297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.681322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.681409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.681434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.681518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.681547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.681629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.681656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.681752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.681779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.631 [2024-11-15 12:48:12.681862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.631 [2024-11-15 12:48:12.681887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.631 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.681974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.682000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.682082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.682108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.682182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.682207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.682291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.682317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 
00:26:32.632 [2024-11-15 12:48:12.682411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.682450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.682546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.682574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.682655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.682681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.682772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.682798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.682908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.682934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.683015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.683045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.683164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.683189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.683303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.632 [2024-11-15 12:48:12.683329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.683455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.683480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 
00:26:32.632 [2024-11-15 12:48:12.683567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.683596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:32.632 [2024-11-15 12:48:12.683674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.683700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.683793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.683818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.683900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.683925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:32.632 [2024-11-15 12:48:12.684017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.684052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.684172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.684201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.684286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:32.632 [2024-11-15 12:48:12.684314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.684396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.684422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.684516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.684552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 
00:26:32.632 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:32.632 [2024-11-15 12:48:12.684666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.684693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.684784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.684811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.684890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.684916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.684992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.685017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.685098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.685124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.685211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.685237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.685317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.685343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.685424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.685449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.685521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.685547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 
00:26:32.632 [2024-11-15 12:48:12.685619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.685644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.685768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.685810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.685919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.685945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.686036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.686062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.686140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.632 [2024-11-15 12:48:12.686166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.632 qpair failed and we were unable to recover it. 00:26:32.632 [2024-11-15 12:48:12.686248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.686273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.686359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.686385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.686493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.686519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.686604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.686631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.686727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.686754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 
00:26:32.633 [2024-11-15 12:48:12.686836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.686861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.686975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.687000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.687082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.687109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.687202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.687228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.687312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.687339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.687444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.687470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.687551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.687591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.687680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.687706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.687799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.687825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.687904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.687929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 
00:26:32.633 [2024-11-15 12:48:12.688021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.688047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.688125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.688150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.688231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.688257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.688395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.688421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.688498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.688523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.688603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.688630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.688709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.688750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.688833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.688859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.688934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.688960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.689076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.689102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 
00:26:32.633 [2024-11-15 12:48:12.689220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.689245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.689335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.689361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.689447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.689475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.689566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.689591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.689702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.689734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.689828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.689853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.689935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.689961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.690054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.633 [2024-11-15 12:48:12.690079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.633 qpair failed and we were unable to recover it. 00:26:32.633 [2024-11-15 12:48:12.690164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.690190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.690273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.690298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 
00:26:32.634 [2024-11-15 12:48:12.690375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.690400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.690507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.690533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.690614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.690639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.690745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.690784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.690876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.690903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.691007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.691035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.691112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.691138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.691246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.691278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.691374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.691400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.691482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.691509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 
00:26:32.634 [2024-11-15 12:48:12.691593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.691619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.691736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.691768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.691861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.691887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.691974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.692001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.692114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.692141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.692220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.692246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.692367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.692404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.692494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.692520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.692600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.692627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.692734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.692761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 
00:26:32.634 [2024-11-15 12:48:12.692847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.692873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.692979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.693004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.693088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.693115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.693198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.693225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.693305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.693333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.693430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.693469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.693590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.693627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.693761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.693791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.693911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.693947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.694044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.694074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 
00:26:32.634 [2024-11-15 12:48:12.694169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.694196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.694290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.694319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.694434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.694459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.694543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.694575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.694668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.694696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.694817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.634 [2024-11-15 12:48:12.694844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.634 qpair failed and we were unable to recover it. 00:26:32.634 [2024-11-15 12:48:12.694929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.694956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.695044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.695071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.695180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.695206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.695288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.695313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 
00:26:32.635 [2024-11-15 12:48:12.695430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.695459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.695545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.695572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.695661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.695688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.695780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.695809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.695903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.695931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.696013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.696039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.696116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.696145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.696266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.696294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.696400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.696427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.696519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.696549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 
00:26:32.635 [2024-11-15 12:48:12.696645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.696671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.696777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.696803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.696887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.696912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.696996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.697021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.697097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.697126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea0c000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.697233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.697261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.697375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.697412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.697551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.697589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.697687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.697713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.697814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.697839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 
00:26:32.635 [2024-11-15 12:48:12.697925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.697950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.698065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.698091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.698170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.698195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.698304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.698333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.698425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.698450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.698565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.698591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.698681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.698707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.698808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.698836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.698924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.698950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.699031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.699059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 
00:26:32.635 [2024-11-15 12:48:12.699150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.699177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.699259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.699284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.699409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.699435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.699516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.635 [2024-11-15 12:48:12.699541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.635 qpair failed and we were unable to recover it. 00:26:32.635 [2024-11-15 12:48:12.699618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.699644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.699729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.699755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.699841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.699866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.699981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.700008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.700087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.700112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.700194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.700219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 
00:26:32.636 [2024-11-15 12:48:12.700328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.700353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.700468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.700493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.700576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.700601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.700681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.700712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.700798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.700824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.700903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.700929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.701009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.701034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.701140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.701165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.701242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.701268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.701376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.701401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 
00:26:32.636 [2024-11-15 12:48:12.701495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.701523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.701639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.701664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.701771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.701810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.701901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.701927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.702010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.702036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.702149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.702174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.702264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.702289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.702372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.702397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.702517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.702544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.702671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.702699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 
00:26:32.636 [2024-11-15 12:48:12.702795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.702821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.702900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.702926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.703010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.703048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.703135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.703161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.703280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.703306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.703389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.703415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.703522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.703554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.703641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.703667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.703746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.703773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.703851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.703878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 
00:26:32.636 [2024-11-15 12:48:12.703957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.703988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.704107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.704132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.636 [2024-11-15 12:48:12.704216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.636 [2024-11-15 12:48:12.704242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.636 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.704323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.704348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.704424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.704449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.704539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.704564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.704670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.704695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.704800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.704826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.704901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.704926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.705007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.705033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 
00:26:32.637 [2024-11-15 12:48:12.705115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.705141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.705220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.705245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.705322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.705347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.705450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.705475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.705592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.705618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.705725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.705751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.705842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.705867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.705948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.705973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.706078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.706104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.706191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.706217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 
00:26:32.637 [2024-11-15 12:48:12.706303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.706329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.706411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.706437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.706513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.706539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.706611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.706637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.706731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.706757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.706839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.706865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.706943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.706968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.707080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.707105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.707189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.707215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.707331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.707357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 
00:26:32.637 [2024-11-15 12:48:12.707442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.707467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.707548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.707573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.707655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.707680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.707782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.707809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.707891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.707916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.708026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.708054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.637 [2024-11-15 12:48:12.708150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.708184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.708281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.708328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.708435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.708466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 
00:26:32.637 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:32.637 [2024-11-15 12:48:12.708551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.637 [2024-11-15 12:48:12.708579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.637 qpair failed and we were unable to recover it. 00:26:32.637 [2024-11-15 12:48:12.708667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.708694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.708820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.638 [2024-11-15 12:48:12.708858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.708950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.708979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.638 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.709076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.709102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.709189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.709214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.709287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.709312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.709394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.709419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.709497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.709522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 
00:26:32.638 [2024-11-15 12:48:12.709634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.709659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.709753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.709802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.709908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.709944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.710048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.710090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.710190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.710224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.710305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.710331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.710410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.710436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.710515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.710542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.710622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.710647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.710739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.710764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 
00:26:32.638 [2024-11-15 12:48:12.710854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.710879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.710962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.710988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.711080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.711105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.711192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.711217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.711297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.711322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.711461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.711486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.711572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.711598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.711674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.711700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.711804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.711829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.711904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.711930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 
00:26:32.638 [2024-11-15 12:48:12.712037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.712063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.712174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.712199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.712316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.712341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.712422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.638 [2024-11-15 12:48:12.712447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.638 qpair failed and we were unable to recover it. 00:26:32.638 [2024-11-15 12:48:12.712534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.712566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.712670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.712700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.712819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.712846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.712926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.712952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.713037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.713063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.713166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.713193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 
00:26:32.639 [2024-11-15 12:48:12.713287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.713314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.713399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.713429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.713510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.713535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.713612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.713637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.713753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.713780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.713875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.713900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.713975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.714001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.714081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.714106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.714180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.714206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.714323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.714348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 
00:26:32.639 [2024-11-15 12:48:12.714436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.714467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.714562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.714593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.714699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.714735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.714837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.714864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.714947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.714975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.715105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.715131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.715208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.715234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.715346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.715371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.715480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.715506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.715591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.715616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 
00:26:32.639 [2024-11-15 12:48:12.715706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.715738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.715861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.715887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.715966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.715991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.716077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.716103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.716194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.716231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.716335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.716364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.716454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.716479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.716557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.716585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.716685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.716715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.716834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.716867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 
00:26:32.639 [2024-11-15 12:48:12.716959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.716986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.717084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.639 [2024-11-15 12:48:12.717109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.639 qpair failed and we were unable to recover it. 00:26:32.639 [2024-11-15 12:48:12.717200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.717225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.717313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.717339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.717422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.717447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.717534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.717559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.717668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.717696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.717808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.717840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.717962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.717999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.718143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.718171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 
00:26:32.640 [2024-11-15 12:48:12.718255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.718280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.718362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.718387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.718514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.718540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.718665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.718694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.718782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.718809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.718890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.718917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.718996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.719030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.719118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.719144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.719234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.719263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.719381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.719407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 
00:26:32.640 [2024-11-15 12:48:12.719483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.719509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.719615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.719640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.719729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.719755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.719849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.719875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.719959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.719984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.720074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.720104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.720216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.720241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.720322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.720347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.720429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.720454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.720544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.720569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 
00:26:32.640 [2024-11-15 12:48:12.720661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.720690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.720815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.720845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.720978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.721005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.721095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.721121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.721242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.721270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.721379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.721404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.721498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.721524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.721608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.721633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.721741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.721767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.640 [2024-11-15 12:48:12.721873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.721899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 
00:26:32.640 [2024-11-15 12:48:12.721979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.640 [2024-11-15 12:48:12.722004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.640 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.722090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.722115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.722199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.722224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.722310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.722335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.722409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.722434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.722546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.722571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.722681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.722706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.722808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.722834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.722949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.722974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.723064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.723090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 
00:26:32.641 [2024-11-15 12:48:12.723168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.723193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.723276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.723301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.723443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.723472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.723550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.723575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.723658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.723683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.723811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.723837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.723925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.723950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.724029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.724058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.724133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.724158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.724275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.724300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 
00:26:32.641 [2024-11-15 12:48:12.724414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.724439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.724551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.724576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.724655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.724680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.724767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.724793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.724881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.724906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.725020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.725045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.725134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.725159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.725242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.725267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.725377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.725402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.725477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.725502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 
00:26:32.641 [2024-11-15 12:48:12.725587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.725620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.725702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.725737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.725881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.725917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.726020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.726046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.726125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.726149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.726228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.726253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.726390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.726415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.726534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.726559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.726678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.641 [2024-11-15 12:48:12.726703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.641 qpair failed and we were unable to recover it. 00:26:32.641 [2024-11-15 12:48:12.726835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.726865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 
00:26:32.642 [2024-11-15 12:48:12.726977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.727003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.727096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.727121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.727203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.727228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.727352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.727377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.727457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.727483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.727610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.727636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.727729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.727755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.727843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.727868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.727948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.727973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.728063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.728088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 
00:26:32.642 [2024-11-15 12:48:12.728176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.728202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.728283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.728308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.728447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.728473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.728566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.728595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.728690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.728728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.728885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.728912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.729033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.729069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.729169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.729198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.729278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.729304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.729385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.729412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 
00:26:32.642 [2024-11-15 12:48:12.729521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.729547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.729630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.729655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.729750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.729777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.729854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.729880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.729966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.729991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.730104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.730129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.730210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.730239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.730318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.730343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.730429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.730456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.730580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.730611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 
00:26:32.642 [2024-11-15 12:48:12.730710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.730763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.730865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.730901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.642 [2024-11-15 12:48:12.731003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.642 [2024-11-15 12:48:12.731037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.642 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.731166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.731202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.731296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.731323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.731417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.731444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.731553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.731581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.731699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.731738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.731833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.731860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.731939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.731965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 
00:26:32.643 [2024-11-15 12:48:12.732106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.732133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.732253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.732279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.732365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.732391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.732507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.732533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.732644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.732669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.732762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.732789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.732881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.732906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.732998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.733023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.733111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.733136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.733220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.733246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 
00:26:32.643 [2024-11-15 12:48:12.733362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.733391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.733482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.733507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.733589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.733617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.733692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.733729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.733855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.733882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.733977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.734016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.734141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.734168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.734252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.734278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.734383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.734409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.734498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.734524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 
00:26:32.643 [2024-11-15 12:48:12.734634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.734660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.734752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.734781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.734895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.734921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.735010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.735036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.735116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.735148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.735272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.735300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.735414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.735440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.735539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.735565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.735654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.735680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.735802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.735828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 
00:26:32.643 [2024-11-15 12:48:12.735917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.643 [2024-11-15 12:48:12.735943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.643 qpair failed and we were unable to recover it. 00:26:32.643 [2024-11-15 12:48:12.736023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.736049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.736136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.736162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.736279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.736304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.736375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.736401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.736484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.736511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.736583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.736609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.736695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.736726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.736818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.736844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.736936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.736962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 
00:26:32.644 [2024-11-15 12:48:12.737045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.737076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.737156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.737181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.737272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.737298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.737386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.737411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.737495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.737521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.737626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.737651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.737732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.737758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.737846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.737871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.737947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.737973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.738078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.738103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 
00:26:32.644 [2024-11-15 12:48:12.738218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.738243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.738330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.738356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.738476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.738516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.738663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.738692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.738810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.738839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.738950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.738979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.739086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.739112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.739203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.739229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.739315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.739343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.739435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.739460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 
00:26:32.644 [2024-11-15 12:48:12.739543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.739570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.739662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.739690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.739785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.739812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.739922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.739948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.740035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.740077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.740161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.740187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.740299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.740325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.740403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.740433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.740577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.740611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.644 qpair failed and we were unable to recover it. 00:26:32.644 [2024-11-15 12:48:12.740707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.644 [2024-11-15 12:48:12.740743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 
00:26:32.645 [2024-11-15 12:48:12.740825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.740850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.740976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.741003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.741090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.741116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.741204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.741235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.741320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.741348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.741439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.741466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.741563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.741601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.741690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.741726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.741815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.741841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.741925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.741951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 
00:26:32.645 [2024-11-15 12:48:12.742029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.742054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.742196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.742222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.742305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.742334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.742414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.742441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.742525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.742553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.742632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.742657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.742744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.742771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.742862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.742891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.742977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.743002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.743086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.743112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 
00:26:32.645 [2024-11-15 12:48:12.743202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.743228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.743320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.743348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.743427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.743455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.743581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.743620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.743708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.743741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.743828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.743853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.743936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.743961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.744051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.744077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.744167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.744192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.744276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.744304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 
00:26:32.645 [2024-11-15 12:48:12.744391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.744416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.744506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.744533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.744620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.744645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.744728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.744754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.744841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.744866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.744941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.744969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.745064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.745089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.645 [2024-11-15 12:48:12.745182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.645 [2024-11-15 12:48:12.745213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.645 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.745306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.745332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.745451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.745492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 
00:26:32.646 [2024-11-15 12:48:12.745604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.745640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.745751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.745780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.745855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.745889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.745974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.746001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.746102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.746129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.746224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.746256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.746391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.746417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.746497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.746524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.746616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.746647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.746772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.746800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 
00:26:32.646 [2024-11-15 12:48:12.746920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.746946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.747035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.747061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.747151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.747179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.747271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.747298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.747389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.747416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.747498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.747524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.747606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.747632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.747752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.747791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.747921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.747947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.748034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.748061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 
00:26:32.646 [2024-11-15 12:48:12.748147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.748174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.748257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.748289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.748392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.748425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.748521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.748548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.748633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.748666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.748762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.748798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.748936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.748973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.749068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.749095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.749183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.749209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.749286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.749311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 
00:26:32.646 [2024-11-15 12:48:12.749417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.749442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.749529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.749557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.749648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.749675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.749771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.749798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.749874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.749902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.646 [2024-11-15 12:48:12.750049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.646 [2024-11-15 12:48:12.750078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.646 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.750197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.750223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.750338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.750365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.750469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.750496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.750581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.750607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 
00:26:32.647 [2024-11-15 12:48:12.750724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.750751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.750828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.750853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.750935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.750960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.751051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.751076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.751159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.751184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.751268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.751293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.751381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.751411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.751491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.751520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.751599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.751625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.751709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.751742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 
00:26:32.647 [2024-11-15 12:48:12.751836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.751865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.751970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.751998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.752085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.752111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.647 Malloc0 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.752204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.752231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.752320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.752347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.752456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.752482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.647 [2024-11-15 12:48:12.752565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.752591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:32.647 [2024-11-15 12:48:12.752668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.752693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 
00:26:32.647 [2024-11-15 12:48:12.752788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.647 [2024-11-15 12:48:12.752813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.752901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:32.647 [2024-11-15 12:48:12.752927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.753011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.753037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.753123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.753149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.753233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.753259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.753349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.753374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.753457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.753482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.753566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.753592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 00:26:32.647 [2024-11-15 12:48:12.753699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.647 [2024-11-15 12:48:12.753729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.647 qpair failed and we were unable to recover it. 
00:26:32.647 [2024-11-15 12:48:12.753822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.753848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.753925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.753951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.754028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.754053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.754139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.754165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.754242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.754267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.754378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.754403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.754484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.754509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.754594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.754625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.754713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.754747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.754832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.754864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 
00:26:32.648 [2024-11-15 12:48:12.754985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.755012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.755106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.755141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.755282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.755317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.755441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.755467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.755551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.755578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.755661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.755686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.755806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.755832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 [2024-11-15 12:48:12.755823] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.755918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.755942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.756019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.756044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.756154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.756181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 
00:26:32.648 [2024-11-15 12:48:12.756271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.756296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.756378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.756404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.756499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.756533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.756626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.756654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.756735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.756765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.756856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.756885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.756975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.757005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.757100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.757127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.757219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.757245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.757350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.757375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 
00:26:32.648 [2024-11-15 12:48:12.757460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.757485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.757570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.757595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.757684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.757709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.757815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.757841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.757927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.757954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.758036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.758061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.758189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.758215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.758307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.648 [2024-11-15 12:48:12.758335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.648 qpair failed and we were unable to recover it. 00:26:32.648 [2024-11-15 12:48:12.758418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.758444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.758531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.758565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 
00:26:32.649 [2024-11-15 12:48:12.758662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.758688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.758778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.758804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.758903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.758928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.759040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.759066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.759146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.759171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.759248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.759273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.759352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.759377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.759461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.759486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.759598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.759623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.759822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.759858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 
00:26:32.649 [2024-11-15 12:48:12.759951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.759978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.760062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.760088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.760182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.760218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.760438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.760474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.760582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.760618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.760723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.760750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.760861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.760886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.760972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.760997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.761079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.761104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.761219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.761244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 
00:26:32.649 [2024-11-15 12:48:12.761356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.761382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.761458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.761483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.761571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.761596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.761682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.761708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.761808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.761838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.761960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.761996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.762096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.762125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.762241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.762273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.762365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.762391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.762473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.762500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 
00:26:32.649 [2024-11-15 12:48:12.762584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.762613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.762703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.762754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.762839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.762866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.762955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.762981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.763060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.763088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.763170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.649 [2024-11-15 12:48:12.763197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.649 qpair failed and we were unable to recover it. 00:26:32.649 [2024-11-15 12:48:12.763305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.763335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.763448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.763474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.763583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.763608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.763729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.763755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 
00:26:32.650 [2024-11-15 12:48:12.763841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.763866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.763947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.763972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.650 [2024-11-15 12:48:12.764081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.764107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:32.650 [2024-11-15 12:48:12.764192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.764217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.650 [2024-11-15 12:48:12.764305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.764330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:32.650 [2024-11-15 12:48:12.764419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.764447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.764525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.764551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.764665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.764693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 
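Interleaved with the connect retries, the test script has begun configuring the target: host/target_disconnect.sh line 22 runs rpc_cmd nvmf_create_subsystem, creating subsystem nqn.2016-06.io.spdk:cnode1 with any-host access (-a) and serial number SPDK00000000000001; the surrounding [[ 0 == 0 ]] and xtrace lines are the harness's own tracing. rpc_cmd is the autotest harness's wrapper around SPDK's scripts/rpc.py, so outside the harness the equivalent call would look roughly like the sketch below (the default RPC socket path is an assumption):

  $ ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --allow-any-host --serial-number SPDK00000000000001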
00:26:32.650 [2024-11-15 12:48:12.764797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.764824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.764920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.764946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.765033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.765060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.765175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.765202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.765281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.765306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.765422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.765447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.765518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.765544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.765625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.765650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.765761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.765787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.765869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.765894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 
00:26:32.650 [2024-11-15 12:48:12.765979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.766004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.766080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.766105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.766187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.766212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.766300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.766334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.766413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.766438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.766525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.766554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.766654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.766682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.766794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.766822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.766913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.766944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.767036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.767063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 
00:26:32.650 [2024-11-15 12:48:12.767146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.767172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.767253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.767279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.767389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.767414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.767491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.767516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.650 [2024-11-15 12:48:12.767595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.650 [2024-11-15 12:48:12.767620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.650 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.767704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.767735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.767820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.767846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.767949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.767975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.768082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.768108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.768192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.768217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 
00:26:32.651 [2024-11-15 12:48:12.768333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.768359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.768433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.768458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.768536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.768561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.768642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.768668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.768748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.768773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.768857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.768882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.768966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.768992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.769074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.769099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.769184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.769209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.769326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.769355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 
00:26:32.651 [2024-11-15 12:48:12.769431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.769462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.769552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.769590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.769673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.769699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.769788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.769813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.769893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.769919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.770026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.770052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.770133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.770158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.770229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.770254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.770335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.770360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.770440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.770465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 
00:26:32.651 [2024-11-15 12:48:12.770575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.770600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.770675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.770700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.770783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.770808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.770919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.770945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.771030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.771056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.771132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.771157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.771231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.771256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.771335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.771360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.771443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.771468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.771548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.771573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 
00:26:32.651 [2024-11-15 12:48:12.771652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.771678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.651 qpair failed and we were unable to recover it. 00:26:32.651 [2024-11-15 12:48:12.771776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.651 [2024-11-15 12:48:12.771801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.771893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.771919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.772002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.772027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.652 [2024-11-15 12:48:12.772107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.772134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:32.652 [2024-11-15 12:48:12.772216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.772242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.652 [2024-11-15 12:48:12.772329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.772357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:32.652 [2024-11-15 12:48:12.772443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.772470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 
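The next target-side step visible between the retries is host/target_disconnect.sh line 24, which attaches the bdev Malloc0 to nqn.2016-06.io.spdk:cnode1 as a namespace. A rough out-of-harness equivalent is sketched below; the bdev_malloc_create call and its sizes are assumptions about how Malloc0 was created earlier in the script, and only the add_ns line mirrors the logged command:

  $ ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512    # example: 64 MB malloc bdev with 512-byte blocks
  $ ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0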
00:26:32.652 [2024-11-15 12:48:12.772550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.772575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.772650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.772675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.772775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.772801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.772873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.772899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.772976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.773001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.773108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.773133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.773211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.773236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.773315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.773340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.773431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.773456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.773567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.773592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 
00:26:32.652 [2024-11-15 12:48:12.773677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.773702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.773795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.773826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.773906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.773931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.774015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.774041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.774119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.774144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.774221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.774246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.774324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.774349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.774462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.774487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.774568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.774593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.774699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.774732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 
00:26:32.652 [2024-11-15 12:48:12.774811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.774836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.774911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.774936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.775045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.775070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.775149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.775174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.775252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.775277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.775367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.652 [2024-11-15 12:48:12.775392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.652 qpair failed and we were unable to recover it. 00:26:32.652 [2024-11-15 12:48:12.775473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.775498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.775582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.775607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.775699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.775742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.775865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.775892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 
00:26:32.653 [2024-11-15 12:48:12.775973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.776007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.776108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.776134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.776224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.776249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.776326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.776351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.776433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.776458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.776536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.776561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.776645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.776670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.776760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.776786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.776900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.776929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.777014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.777039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 
00:26:32.653 [2024-11-15 12:48:12.777126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.777151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.777228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.777253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.777326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.777351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.777429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.777454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.777543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.777568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.777657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.777691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.777831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.777870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.777975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.778012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.778139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.778165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.778252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.778277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 
00:26:32.653 [2024-11-15 12:48:12.778353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.778378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.778459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.778484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.778567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.778592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.778677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.778703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdefa0 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.778907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.778937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.779047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.779074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.779159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.779185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.779294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.779328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.779427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.779456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.779547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.779573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 
00:26:32.653 [2024-11-15 12:48:12.779689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.779725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.779822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.779848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.779938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 [2024-11-15 12:48:12.779967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.653 [2024-11-15 12:48:12.780066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.653 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.653 [2024-11-15 12:48:12.780093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.653 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.780206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.780232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.780326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.780353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.780441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.780468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:32.654 [2024-11-15 12:48:12.780566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.780592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 
00:26:32.654 [2024-11-15 12:48:12.780701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.780736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.780822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.780848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.780935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.780962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.781066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.781093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.781177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.781203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.781299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.781326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.781400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.781427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.781520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.781547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.781669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.781695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.781801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.781829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 
00:26:32.654 [2024-11-15 12:48:12.781910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.781936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.782023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.782050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.782129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.782155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.782270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.782296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.782389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.782420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.782502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.782536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.782652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.782688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea00000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.782797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.782825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.782909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.782939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.783050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.783076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 
00:26:32.654 [2024-11-15 12:48:12.783155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.783181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.783276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.783303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.783399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.783426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.783504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.783530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.783608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.783633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.783731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.783758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 [2024-11-15 12:48:12.783837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.654 [2024-11-15 12:48:12.783862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea04000b90 with addr=10.0.0.2, port=4420 00:26:32.654 qpair failed and we were unable to recover it. 
00:26:32.654 [2024-11-15 12:48:12.784275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.654 [2024-11-15 12:48:12.786802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.654 [2024-11-15 12:48:12.786926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.654 [2024-11-15 12:48:12.786955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.654 [2024-11-15 12:48:12.786971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.654 [2024-11-15 12:48:12.786983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.654 [2024-11-15 12:48:12.787020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.654 qpair failed and we were unable to recover it. 00:26:32.654 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.654 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:32.654 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.654 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:32.654 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.655 12:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1136027 00:26:32.655 [2024-11-15 12:48:12.796482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.655 [2024-11-15 12:48:12.796593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.655 [2024-11-15 12:48:12.796621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.655 [2024-11-15 12:48:12.796636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.655 [2024-11-15 12:48:12.796658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.655 [2024-11-15 12:48:12.796690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.655 qpair failed and we were unable to recover it. 
00:26:32.655 [2024-11-15 12:48:12.806505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.655 [2024-11-15 12:48:12.806599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.655 [2024-11-15 12:48:12.806626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.655 [2024-11-15 12:48:12.806641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.655 [2024-11-15 12:48:12.806654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.655 [2024-11-15 12:48:12.806698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.655 qpair failed and we were unable to recover it. 00:26:32.655 [2024-11-15 12:48:12.816479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.655 [2024-11-15 12:48:12.816571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.655 [2024-11-15 12:48:12.816598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.655 [2024-11-15 12:48:12.816613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.655 [2024-11-15 12:48:12.816625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.655 [2024-11-15 12:48:12.816656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.655 qpair failed and we were unable to recover it. 00:26:32.655 [2024-11-15 12:48:12.826485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.655 [2024-11-15 12:48:12.826577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.655 [2024-11-15 12:48:12.826604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.655 [2024-11-15 12:48:12.826619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.655 [2024-11-15 12:48:12.826631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.655 [2024-11-15 12:48:12.826661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.655 qpair failed and we were unable to recover it. 
00:26:32.655 [2024-11-15 12:48:12.836488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.655 [2024-11-15 12:48:12.836624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.655 [2024-11-15 12:48:12.836650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.655 [2024-11-15 12:48:12.836666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.655 [2024-11-15 12:48:12.836678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.655 [2024-11-15 12:48:12.836708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.655 qpair failed and we were unable to recover it. 00:26:32.655 [2024-11-15 12:48:12.846487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.655 [2024-11-15 12:48:12.846567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.655 [2024-11-15 12:48:12.846593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.655 [2024-11-15 12:48:12.846608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.655 [2024-11-15 12:48:12.846620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.655 [2024-11-15 12:48:12.846650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.655 qpair failed and we were unable to recover it. 00:26:32.655 [2024-11-15 12:48:12.856532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.655 [2024-11-15 12:48:12.856621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.655 [2024-11-15 12:48:12.856647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.655 [2024-11-15 12:48:12.856662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.655 [2024-11-15 12:48:12.856675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.655 [2024-11-15 12:48:12.856705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.655 qpair failed and we were unable to recover it. 
00:26:32.655 [2024-11-15 12:48:12.866601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.655 [2024-11-15 12:48:12.866729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.655 [2024-11-15 12:48:12.866756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.655 [2024-11-15 12:48:12.866771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.655 [2024-11-15 12:48:12.866784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.655 [2024-11-15 12:48:12.866814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.655 qpair failed and we were unable to recover it. 00:26:32.655 [2024-11-15 12:48:12.876614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.655 [2024-11-15 12:48:12.876713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.655 [2024-11-15 12:48:12.876760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.655 [2024-11-15 12:48:12.876786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.655 [2024-11-15 12:48:12.876801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.655 [2024-11-15 12:48:12.876835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.655 qpair failed and we were unable to recover it. 00:26:32.655 [2024-11-15 12:48:12.886635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.655 [2024-11-15 12:48:12.886724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.655 [2024-11-15 12:48:12.886757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.655 [2024-11-15 12:48:12.886772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.655 [2024-11-15 12:48:12.886785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.655 [2024-11-15 12:48:12.886816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.655 qpair failed and we were unable to recover it. 
00:26:32.655 [2024-11-15 12:48:12.896649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.655 [2024-11-15 12:48:12.896744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.655 [2024-11-15 12:48:12.896770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.655 [2024-11-15 12:48:12.896785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.655 [2024-11-15 12:48:12.896798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.655 [2024-11-15 12:48:12.896827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.655 qpair failed and we were unable to recover it. 00:26:32.655 [2024-11-15 12:48:12.906689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.655 [2024-11-15 12:48:12.906787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.655 [2024-11-15 12:48:12.906815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.655 [2024-11-15 12:48:12.906829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.655 [2024-11-15 12:48:12.906841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.655 [2024-11-15 12:48:12.906871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.655 qpair failed and we were unable to recover it. 00:26:32.655 [2024-11-15 12:48:12.916850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.655 [2024-11-15 12:48:12.916938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.655 [2024-11-15 12:48:12.916964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.655 [2024-11-15 12:48:12.916978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.655 [2024-11-15 12:48:12.916990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.655 [2024-11-15 12:48:12.917020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.655 qpair failed and we were unable to recover it. 
00:26:32.656 [2024-11-15 12:48:12.926736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.656 [2024-11-15 12:48:12.926819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.656 [2024-11-15 12:48:12.926845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.656 [2024-11-15 12:48:12.926859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.656 [2024-11-15 12:48:12.926879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.656 [2024-11-15 12:48:12.926910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.656 qpair failed and we were unable to recover it. 00:26:32.915 [2024-11-15 12:48:12.936775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.915 [2024-11-15 12:48:12.936871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.915 [2024-11-15 12:48:12.936897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.915 [2024-11-15 12:48:12.936912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.915 [2024-11-15 12:48:12.936924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.915 [2024-11-15 12:48:12.936954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.915 qpair failed and we were unable to recover it. 00:26:32.915 [2024-11-15 12:48:12.946893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.915 [2024-11-15 12:48:12.947031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.915 [2024-11-15 12:48:12.947062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.915 [2024-11-15 12:48:12.947078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.915 [2024-11-15 12:48:12.947091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.915 [2024-11-15 12:48:12.947122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.915 qpair failed and we were unable to recover it. 
00:26:32.915 [2024-11-15 12:48:12.956803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.915 [2024-11-15 12:48:12.956889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.915 [2024-11-15 12:48:12.956916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.915 [2024-11-15 12:48:12.956931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.915 [2024-11-15 12:48:12.956944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.915 [2024-11-15 12:48:12.956974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.915 qpair failed and we were unable to recover it. 00:26:32.915 [2024-11-15 12:48:12.966929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.915 [2024-11-15 12:48:12.967030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.915 [2024-11-15 12:48:12.967055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.915 [2024-11-15 12:48:12.967070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.915 [2024-11-15 12:48:12.967082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.915 [2024-11-15 12:48:12.967112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.915 qpair failed and we were unable to recover it. 00:26:32.915 [2024-11-15 12:48:12.976871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.915 [2024-11-15 12:48:12.976957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.915 [2024-11-15 12:48:12.976983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.915 [2024-11-15 12:48:12.976998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.915 [2024-11-15 12:48:12.977010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.915 [2024-11-15 12:48:12.977040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.915 qpair failed and we were unable to recover it. 
00:26:32.915 [2024-11-15 12:48:12.986933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.915 [2024-11-15 12:48:12.987019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.915 [2024-11-15 12:48:12.987045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.915 [2024-11-15 12:48:12.987059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.915 [2024-11-15 12:48:12.987071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.915 [2024-11-15 12:48:12.987102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.915 qpair failed and we were unable to recover it. 00:26:32.915 [2024-11-15 12:48:12.996921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.915 [2024-11-15 12:48:12.997006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.915 [2024-11-15 12:48:12.997032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.915 [2024-11-15 12:48:12.997046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.915 [2024-11-15 12:48:12.997058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.915 [2024-11-15 12:48:12.997088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.915 qpair failed and we were unable to recover it. 00:26:32.915 [2024-11-15 12:48:13.007026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.915 [2024-11-15 12:48:13.007115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.915 [2024-11-15 12:48:13.007140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.915 [2024-11-15 12:48:13.007155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.915 [2024-11-15 12:48:13.007167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.915 [2024-11-15 12:48:13.007198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.915 qpair failed and we were unable to recover it. 
00:26:32.915 [2024-11-15 12:48:13.017083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.915 [2024-11-15 12:48:13.017173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.915 [2024-11-15 12:48:13.017204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.915 [2024-11-15 12:48:13.017219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.915 [2024-11-15 12:48:13.017232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.915 [2024-11-15 12:48:13.017261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.915 qpair failed and we were unable to recover it. 00:26:32.915 [2024-11-15 12:48:13.027010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.915 [2024-11-15 12:48:13.027094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.915 [2024-11-15 12:48:13.027119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.915 [2024-11-15 12:48:13.027134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.915 [2024-11-15 12:48:13.027146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.915 [2024-11-15 12:48:13.027176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.915 qpair failed and we were unable to recover it. 00:26:32.915 [2024-11-15 12:48:13.037061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.915 [2024-11-15 12:48:13.037148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.915 [2024-11-15 12:48:13.037173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.916 [2024-11-15 12:48:13.037188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.916 [2024-11-15 12:48:13.037201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.916 [2024-11-15 12:48:13.037231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.916 qpair failed and we were unable to recover it. 
00:26:32.916 [2024-11-15 12:48:13.047069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.916 [2024-11-15 12:48:13.047151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.916 [2024-11-15 12:48:13.047177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.916 [2024-11-15 12:48:13.047191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.916 [2024-11-15 12:48:13.047204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.916 [2024-11-15 12:48:13.047250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.916 qpair failed and we were unable to recover it. 00:26:32.916 [2024-11-15 12:48:13.057180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.916 [2024-11-15 12:48:13.057265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.916 [2024-11-15 12:48:13.057290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.916 [2024-11-15 12:48:13.057310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.916 [2024-11-15 12:48:13.057323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.916 [2024-11-15 12:48:13.057354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.916 qpair failed and we were unable to recover it. 00:26:32.916 [2024-11-15 12:48:13.067131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.916 [2024-11-15 12:48:13.067241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.916 [2024-11-15 12:48:13.067267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.916 [2024-11-15 12:48:13.067281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.916 [2024-11-15 12:48:13.067294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.916 [2024-11-15 12:48:13.067324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.916 qpair failed and we were unable to recover it. 
00:26:32.916 [2024-11-15 12:48:13.077144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.916 [2024-11-15 12:48:13.077225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.916 [2024-11-15 12:48:13.077250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.916 [2024-11-15 12:48:13.077264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.916 [2024-11-15 12:48:13.077277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.916 [2024-11-15 12:48:13.077307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.916 qpair failed and we were unable to recover it. 00:26:32.916 [2024-11-15 12:48:13.087199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.916 [2024-11-15 12:48:13.087283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.916 [2024-11-15 12:48:13.087310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.916 [2024-11-15 12:48:13.087324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.916 [2024-11-15 12:48:13.087337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.916 [2024-11-15 12:48:13.087380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.916 qpair failed and we were unable to recover it. 00:26:32.916 [2024-11-15 12:48:13.097259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.916 [2024-11-15 12:48:13.097346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.916 [2024-11-15 12:48:13.097372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.916 [2024-11-15 12:48:13.097386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.916 [2024-11-15 12:48:13.097398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.916 [2024-11-15 12:48:13.097428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.916 qpair failed and we were unable to recover it. 
00:26:32.916 [2024-11-15 12:48:13.107228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.916 [2024-11-15 12:48:13.107312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.916 [2024-11-15 12:48:13.107338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.916 [2024-11-15 12:48:13.107353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.916 [2024-11-15 12:48:13.107366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.916 [2024-11-15 12:48:13.107396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.916 qpair failed and we were unable to recover it. 00:26:32.916 [2024-11-15 12:48:13.117285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.916 [2024-11-15 12:48:13.117375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.916 [2024-11-15 12:48:13.117401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.916 [2024-11-15 12:48:13.117416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.916 [2024-11-15 12:48:13.117428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.916 [2024-11-15 12:48:13.117459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.916 qpair failed and we were unable to recover it. 00:26:32.916 [2024-11-15 12:48:13.127283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.916 [2024-11-15 12:48:13.127375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.916 [2024-11-15 12:48:13.127407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.916 [2024-11-15 12:48:13.127426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.916 [2024-11-15 12:48:13.127440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.916 [2024-11-15 12:48:13.127471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.916 qpair failed and we were unable to recover it. 
00:26:32.916 [2024-11-15 12:48:13.137364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.916 [2024-11-15 12:48:13.137468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.916 [2024-11-15 12:48:13.137495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.916 [2024-11-15 12:48:13.137510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.916 [2024-11-15 12:48:13.137523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.916 [2024-11-15 12:48:13.137553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.916 qpair failed and we were unable to recover it. 00:26:32.916 [2024-11-15 12:48:13.147371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.916 [2024-11-15 12:48:13.147497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.916 [2024-11-15 12:48:13.147523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.916 [2024-11-15 12:48:13.147537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.916 [2024-11-15 12:48:13.147550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.916 [2024-11-15 12:48:13.147579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.916 qpair failed and we were unable to recover it. 00:26:32.916 [2024-11-15 12:48:13.157387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.916 [2024-11-15 12:48:13.157467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.916 [2024-11-15 12:48:13.157493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.916 [2024-11-15 12:48:13.157508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.916 [2024-11-15 12:48:13.157520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.916 [2024-11-15 12:48:13.157562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.916 qpair failed and we were unable to recover it. 
00:26:32.916 [2024-11-15 12:48:13.167413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.916 [2024-11-15 12:48:13.167495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.916 [2024-11-15 12:48:13.167522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.916 [2024-11-15 12:48:13.167537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.917 [2024-11-15 12:48:13.167549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.917 [2024-11-15 12:48:13.167591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.917 qpair failed and we were unable to recover it. 00:26:32.917 [2024-11-15 12:48:13.177432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.917 [2024-11-15 12:48:13.177522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.917 [2024-11-15 12:48:13.177549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.917 [2024-11-15 12:48:13.177564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.917 [2024-11-15 12:48:13.177576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.917 [2024-11-15 12:48:13.177606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.917 qpair failed and we were unable to recover it. 00:26:32.917 [2024-11-15 12:48:13.187447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.917 [2024-11-15 12:48:13.187568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.917 [2024-11-15 12:48:13.187594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.917 [2024-11-15 12:48:13.187614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.917 [2024-11-15 12:48:13.187628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.917 [2024-11-15 12:48:13.187658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.917 qpair failed and we were unable to recover it. 
00:26:32.917 [2024-11-15 12:48:13.197479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.917 [2024-11-15 12:48:13.197568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.917 [2024-11-15 12:48:13.197594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.917 [2024-11-15 12:48:13.197609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.917 [2024-11-15 12:48:13.197622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.917 [2024-11-15 12:48:13.197651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.917 qpair failed and we were unable to recover it. 00:26:32.917 [2024-11-15 12:48:13.207598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.917 [2024-11-15 12:48:13.207682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.917 [2024-11-15 12:48:13.207708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.917 [2024-11-15 12:48:13.207732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.917 [2024-11-15 12:48:13.207747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.917 [2024-11-15 12:48:13.207777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.917 qpair failed and we were unable to recover it. 00:26:32.917 [2024-11-15 12:48:13.217588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.917 [2024-11-15 12:48:13.217674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.917 [2024-11-15 12:48:13.217699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.917 [2024-11-15 12:48:13.217714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.917 [2024-11-15 12:48:13.217734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.917 [2024-11-15 12:48:13.217765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.917 qpair failed and we were unable to recover it. 
00:26:32.917 [2024-11-15 12:48:13.227606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.917 [2024-11-15 12:48:13.227698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.917 [2024-11-15 12:48:13.227733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.917 [2024-11-15 12:48:13.227750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.917 [2024-11-15 12:48:13.227762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.917 [2024-11-15 12:48:13.227798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.917 qpair failed and we were unable to recover it. 00:26:32.917 [2024-11-15 12:48:13.237763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.917 [2024-11-15 12:48:13.237848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.917 [2024-11-15 12:48:13.237874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.917 [2024-11-15 12:48:13.237889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.917 [2024-11-15 12:48:13.237901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.917 [2024-11-15 12:48:13.237930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.917 qpair failed and we were unable to recover it. 00:26:32.917 [2024-11-15 12:48:13.247607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.917 [2024-11-15 12:48:13.247685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.917 [2024-11-15 12:48:13.247711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.917 [2024-11-15 12:48:13.247735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.917 [2024-11-15 12:48:13.247748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:32.917 [2024-11-15 12:48:13.247778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:32.917 qpair failed and we were unable to recover it. 
00:26:33.176 [2024-11-15 12:48:13.257669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.176 [2024-11-15 12:48:13.257784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.176 [2024-11-15 12:48:13.257810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.176 [2024-11-15 12:48:13.257825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.176 [2024-11-15 12:48:13.257837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.176 [2024-11-15 12:48:13.257867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.176 qpair failed and we were unable to recover it. 00:26:33.176 [2024-11-15 12:48:13.267711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.176 [2024-11-15 12:48:13.267803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.176 [2024-11-15 12:48:13.267829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.176 [2024-11-15 12:48:13.267844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.176 [2024-11-15 12:48:13.267856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.176 [2024-11-15 12:48:13.267886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.176 qpair failed and we were unable to recover it. 00:26:33.176 [2024-11-15 12:48:13.277811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.176 [2024-11-15 12:48:13.277904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.176 [2024-11-15 12:48:13.277930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.176 [2024-11-15 12:48:13.277945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.176 [2024-11-15 12:48:13.277957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.176 [2024-11-15 12:48:13.277986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.176 qpair failed and we were unable to recover it. 
00:26:33.176 [2024-11-15 12:48:13.287742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.176 [2024-11-15 12:48:13.287826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.176 [2024-11-15 12:48:13.287852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.176 [2024-11-15 12:48:13.287866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.176 [2024-11-15 12:48:13.287878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.176 [2024-11-15 12:48:13.287908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.176 qpair failed and we were unable to recover it. 00:26:33.176 [2024-11-15 12:48:13.297816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.176 [2024-11-15 12:48:13.297916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.176 [2024-11-15 12:48:13.297944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.176 [2024-11-15 12:48:13.297961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.176 [2024-11-15 12:48:13.297974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.177 [2024-11-15 12:48:13.298005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.177 qpair failed and we were unable to recover it. 00:26:33.177 [2024-11-15 12:48:13.307791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.177 [2024-11-15 12:48:13.307880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.177 [2024-11-15 12:48:13.307906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.177 [2024-11-15 12:48:13.307921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.177 [2024-11-15 12:48:13.307933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.177 [2024-11-15 12:48:13.307963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.177 qpair failed and we were unable to recover it. 
00:26:33.177 [2024-11-15 12:48:13.317846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.177 [2024-11-15 12:48:13.317926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.177 [2024-11-15 12:48:13.317957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.177 [2024-11-15 12:48:13.317973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.177 [2024-11-15 12:48:13.317984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.177 [2024-11-15 12:48:13.318015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.177 qpair failed and we were unable to recover it. 00:26:33.177 [2024-11-15 12:48:13.327873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.177 [2024-11-15 12:48:13.327974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.177 [2024-11-15 12:48:13.328000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.177 [2024-11-15 12:48:13.328014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.177 [2024-11-15 12:48:13.328026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.177 [2024-11-15 12:48:13.328055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.177 qpair failed and we were unable to recover it. 00:26:33.177 [2024-11-15 12:48:13.337919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.177 [2024-11-15 12:48:13.338009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.177 [2024-11-15 12:48:13.338035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.177 [2024-11-15 12:48:13.338049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.177 [2024-11-15 12:48:13.338061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.177 [2024-11-15 12:48:13.338091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.177 qpair failed and we were unable to recover it. 
00:26:33.177 [2024-11-15 12:48:13.347895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.177 [2024-11-15 12:48:13.347976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.177 [2024-11-15 12:48:13.348001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.177 [2024-11-15 12:48:13.348015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.177 [2024-11-15 12:48:13.348028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.177 [2024-11-15 12:48:13.348057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.177 qpair failed and we were unable to recover it. 00:26:33.177 [2024-11-15 12:48:13.357959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.177 [2024-11-15 12:48:13.358075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.177 [2024-11-15 12:48:13.358101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.177 [2024-11-15 12:48:13.358115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.177 [2024-11-15 12:48:13.358133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.177 [2024-11-15 12:48:13.358164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.177 qpair failed and we were unable to recover it. 00:26:33.177 [2024-11-15 12:48:13.367964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.177 [2024-11-15 12:48:13.368044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.177 [2024-11-15 12:48:13.368069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.177 [2024-11-15 12:48:13.368083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.177 [2024-11-15 12:48:13.368095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.177 [2024-11-15 12:48:13.368125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.177 qpair failed and we were unable to recover it. 
00:26:33.177 [2024-11-15 12:48:13.377991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.177 [2024-11-15 12:48:13.378087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.177 [2024-11-15 12:48:13.378120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.177 [2024-11-15 12:48:13.378138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.177 [2024-11-15 12:48:13.378151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.177 [2024-11-15 12:48:13.378182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.177 qpair failed and we were unable to recover it. 00:26:33.177 [2024-11-15 12:48:13.388060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.177 [2024-11-15 12:48:13.388149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.177 [2024-11-15 12:48:13.388176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.177 [2024-11-15 12:48:13.388191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.177 [2024-11-15 12:48:13.388203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.177 [2024-11-15 12:48:13.388234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.177 qpair failed and we were unable to recover it. 00:26:33.177 [2024-11-15 12:48:13.398058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.177 [2024-11-15 12:48:13.398146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.177 [2024-11-15 12:48:13.398172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.177 [2024-11-15 12:48:13.398187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.177 [2024-11-15 12:48:13.398199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.177 [2024-11-15 12:48:13.398229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.177 qpair failed and we were unable to recover it. 
00:26:33.177 [2024-11-15 12:48:13.408066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.177 [2024-11-15 12:48:13.408150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.177 [2024-11-15 12:48:13.408176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.177 [2024-11-15 12:48:13.408191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.177 [2024-11-15 12:48:13.408204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.177 [2024-11-15 12:48:13.408234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.177 qpair failed and we were unable to recover it. 00:26:33.177 [2024-11-15 12:48:13.418151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.177 [2024-11-15 12:48:13.418286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.177 [2024-11-15 12:48:13.418310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.177 [2024-11-15 12:48:13.418324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.177 [2024-11-15 12:48:13.418337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.177 [2024-11-15 12:48:13.418367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.177 qpair failed and we were unable to recover it. 00:26:33.177 [2024-11-15 12:48:13.428176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.177 [2024-11-15 12:48:13.428262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.177 [2024-11-15 12:48:13.428288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.177 [2024-11-15 12:48:13.428302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.177 [2024-11-15 12:48:13.428314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.177 [2024-11-15 12:48:13.428344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.177 qpair failed and we were unable to recover it. 
00:26:33.178 [2024-11-15 12:48:13.438209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.178 [2024-11-15 12:48:13.438302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.178 [2024-11-15 12:48:13.438328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.178 [2024-11-15 12:48:13.438342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.178 [2024-11-15 12:48:13.438354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.178 [2024-11-15 12:48:13.438384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.178 qpair failed and we were unable to recover it. 00:26:33.178 [2024-11-15 12:48:13.448199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.178 [2024-11-15 12:48:13.448281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.178 [2024-11-15 12:48:13.448312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.178 [2024-11-15 12:48:13.448328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.178 [2024-11-15 12:48:13.448340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.178 [2024-11-15 12:48:13.448371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.178 qpair failed and we were unable to recover it. 00:26:33.178 [2024-11-15 12:48:13.458272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.178 [2024-11-15 12:48:13.458374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.178 [2024-11-15 12:48:13.458399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.178 [2024-11-15 12:48:13.458413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.178 [2024-11-15 12:48:13.458426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.178 [2024-11-15 12:48:13.458455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.178 qpair failed and we were unable to recover it. 
00:26:33.178 [2024-11-15 12:48:13.468262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.178 [2024-11-15 12:48:13.468344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.178 [2024-11-15 12:48:13.468370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.178 [2024-11-15 12:48:13.468385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.178 [2024-11-15 12:48:13.468397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.178 [2024-11-15 12:48:13.468439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.178 qpair failed and we were unable to recover it. 00:26:33.178 [2024-11-15 12:48:13.478250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.178 [2024-11-15 12:48:13.478330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.178 [2024-11-15 12:48:13.478356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.178 [2024-11-15 12:48:13.478371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.178 [2024-11-15 12:48:13.478384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.178 [2024-11-15 12:48:13.478414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.178 qpair failed and we were unable to recover it. 00:26:33.178 [2024-11-15 12:48:13.488312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.178 [2024-11-15 12:48:13.488395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.178 [2024-11-15 12:48:13.488421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.178 [2024-11-15 12:48:13.488436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.178 [2024-11-15 12:48:13.488456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.178 [2024-11-15 12:48:13.488486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.178 qpair failed and we were unable to recover it. 
00:26:33.178 [2024-11-15 12:48:13.498350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.178 [2024-11-15 12:48:13.498481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.178 [2024-11-15 12:48:13.498506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.178 [2024-11-15 12:48:13.498521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.178 [2024-11-15 12:48:13.498532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.178 [2024-11-15 12:48:13.498562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.178 qpair failed and we were unable to recover it. 00:26:33.178 [2024-11-15 12:48:13.508463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.178 [2024-11-15 12:48:13.508545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.178 [2024-11-15 12:48:13.508571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.178 [2024-11-15 12:48:13.508586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.178 [2024-11-15 12:48:13.508598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.178 [2024-11-15 12:48:13.508628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.178 qpair failed and we were unable to recover it. 00:26:33.437 [2024-11-15 12:48:13.518454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.437 [2024-11-15 12:48:13.518597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.437 [2024-11-15 12:48:13.518623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.437 [2024-11-15 12:48:13.518638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.437 [2024-11-15 12:48:13.518650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.437 [2024-11-15 12:48:13.518680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.437 qpair failed and we were unable to recover it. 
00:26:33.437 [2024-11-15 12:48:13.528453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.437 [2024-11-15 12:48:13.528577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.437 [2024-11-15 12:48:13.528604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.437 [2024-11-15 12:48:13.528618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.437 [2024-11-15 12:48:13.528630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.437 [2024-11-15 12:48:13.528660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.437 qpair failed and we were unable to recover it. 00:26:33.437 [2024-11-15 12:48:13.538473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.437 [2024-11-15 12:48:13.538564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.437 [2024-11-15 12:48:13.538590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.437 [2024-11-15 12:48:13.538604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.437 [2024-11-15 12:48:13.538616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.437 [2024-11-15 12:48:13.538647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.437 qpair failed and we were unable to recover it. 00:26:33.437 [2024-11-15 12:48:13.548517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.437 [2024-11-15 12:48:13.548650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.437 [2024-11-15 12:48:13.548676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.437 [2024-11-15 12:48:13.548691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.437 [2024-11-15 12:48:13.548703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.437 [2024-11-15 12:48:13.548741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.437 qpair failed and we were unable to recover it. 
00:26:33.437 [2024-11-15 12:48:13.558489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.437 [2024-11-15 12:48:13.558579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.437 [2024-11-15 12:48:13.558604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.437 [2024-11-15 12:48:13.558619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.437 [2024-11-15 12:48:13.558631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.437 [2024-11-15 12:48:13.558661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.437 qpair failed and we were unable to recover it. 00:26:33.437 [2024-11-15 12:48:13.568831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.437 [2024-11-15 12:48:13.568944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.437 [2024-11-15 12:48:13.568970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.437 [2024-11-15 12:48:13.568985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.437 [2024-11-15 12:48:13.568997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.437 [2024-11-15 12:48:13.569027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.437 qpair failed and we were unable to recover it. 00:26:33.437 [2024-11-15 12:48:13.578640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.437 [2024-11-15 12:48:13.578758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.437 [2024-11-15 12:48:13.578791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.437 [2024-11-15 12:48:13.578807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.437 [2024-11-15 12:48:13.578820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.437 [2024-11-15 12:48:13.578849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.437 qpair failed and we were unable to recover it. 
00:26:33.437 [2024-11-15 12:48:13.588744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.437 [2024-11-15 12:48:13.588830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.437 [2024-11-15 12:48:13.588856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.437 [2024-11-15 12:48:13.588871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.437 [2024-11-15 12:48:13.588883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.437 [2024-11-15 12:48:13.588913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.437 qpair failed and we were unable to recover it. 00:26:33.437 [2024-11-15 12:48:13.598675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.437 [2024-11-15 12:48:13.598781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.437 [2024-11-15 12:48:13.598808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.437 [2024-11-15 12:48:13.598823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.437 [2024-11-15 12:48:13.598835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.437 [2024-11-15 12:48:13.598867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.437 qpair failed and we were unable to recover it. 00:26:33.437 [2024-11-15 12:48:13.608670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.437 [2024-11-15 12:48:13.608771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.437 [2024-11-15 12:48:13.608799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.437 [2024-11-15 12:48:13.608818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.437 [2024-11-15 12:48:13.608831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.437 [2024-11-15 12:48:13.608862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.437 qpair failed and we were unable to recover it. 
00:26:33.438 [2024-11-15 12:48:13.618705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.438 [2024-11-15 12:48:13.618823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.438 [2024-11-15 12:48:13.618849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.438 [2024-11-15 12:48:13.618869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.438 [2024-11-15 12:48:13.618882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.438 [2024-11-15 12:48:13.618913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.438 qpair failed and we were unable to recover it. 00:26:33.438 [2024-11-15 12:48:13.628767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.438 [2024-11-15 12:48:13.628865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.438 [2024-11-15 12:48:13.628898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.438 [2024-11-15 12:48:13.628915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.438 [2024-11-15 12:48:13.628928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.438 [2024-11-15 12:48:13.628960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.438 qpair failed and we were unable to recover it. 00:26:33.438 [2024-11-15 12:48:13.638784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.438 [2024-11-15 12:48:13.638869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.438 [2024-11-15 12:48:13.638896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.438 [2024-11-15 12:48:13.638912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.438 [2024-11-15 12:48:13.638924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.438 [2024-11-15 12:48:13.638955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.438 qpair failed and we were unable to recover it. 
00:26:33.438 [2024-11-15 12:48:13.648784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.438 [2024-11-15 12:48:13.648866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.438 [2024-11-15 12:48:13.648893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.438 [2024-11-15 12:48:13.648907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.438 [2024-11-15 12:48:13.648920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.438 [2024-11-15 12:48:13.648951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.438 qpair failed and we were unable to recover it. 00:26:33.438 [2024-11-15 12:48:13.658880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.438 [2024-11-15 12:48:13.658974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.438 [2024-11-15 12:48:13.659001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.438 [2024-11-15 12:48:13.659016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.438 [2024-11-15 12:48:13.659029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.438 [2024-11-15 12:48:13.659065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.438 qpair failed and we were unable to recover it. 00:26:33.438 [2024-11-15 12:48:13.668851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.438 [2024-11-15 12:48:13.668939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.438 [2024-11-15 12:48:13.668966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.438 [2024-11-15 12:48:13.668982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.438 [2024-11-15 12:48:13.668995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.438 [2024-11-15 12:48:13.669025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.438 qpair failed and we were unable to recover it. 
00:26:33.438 [2024-11-15 12:48:13.678897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.438 [2024-11-15 12:48:13.678981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.438 [2024-11-15 12:48:13.679007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.438 [2024-11-15 12:48:13.679022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.438 [2024-11-15 12:48:13.679034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.438 [2024-11-15 12:48:13.679064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.438 qpair failed and we were unable to recover it. 00:26:33.438 [2024-11-15 12:48:13.688929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.438 [2024-11-15 12:48:13.689013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.438 [2024-11-15 12:48:13.689039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.438 [2024-11-15 12:48:13.689054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.438 [2024-11-15 12:48:13.689066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.438 [2024-11-15 12:48:13.689099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.438 qpair failed and we were unable to recover it. 00:26:33.438 [2024-11-15 12:48:13.698945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.438 [2024-11-15 12:48:13.699041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.438 [2024-11-15 12:48:13.699067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.438 [2024-11-15 12:48:13.699082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.438 [2024-11-15 12:48:13.699094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.438 [2024-11-15 12:48:13.699124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.438 qpair failed and we were unable to recover it. 
00:26:33.438 [2024-11-15 12:48:13.708984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.438 [2024-11-15 12:48:13.709087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.438 [2024-11-15 12:48:13.709113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.438 [2024-11-15 12:48:13.709128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.438 [2024-11-15 12:48:13.709140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.438 [2024-11-15 12:48:13.709170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.438 qpair failed and we were unable to recover it. 00:26:33.438 [2024-11-15 12:48:13.718991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.438 [2024-11-15 12:48:13.719085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.438 [2024-11-15 12:48:13.719111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.438 [2024-11-15 12:48:13.719126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.438 [2024-11-15 12:48:13.719139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.438 [2024-11-15 12:48:13.719169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.438 qpair failed and we were unable to recover it. 00:26:33.438 [2024-11-15 12:48:13.729046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.438 [2024-11-15 12:48:13.729157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.438 [2024-11-15 12:48:13.729185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.438 [2024-11-15 12:48:13.729200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.438 [2024-11-15 12:48:13.729212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.438 [2024-11-15 12:48:13.729243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.438 qpair failed and we were unable to recover it. 
00:26:33.438 [2024-11-15 12:48:13.739044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.439 [2024-11-15 12:48:13.739133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.439 [2024-11-15 12:48:13.739159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.439 [2024-11-15 12:48:13.739174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.439 [2024-11-15 12:48:13.739187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.439 [2024-11-15 12:48:13.739217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.439 qpair failed and we were unable to recover it. 00:26:33.439 [2024-11-15 12:48:13.749105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.439 [2024-11-15 12:48:13.749194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.439 [2024-11-15 12:48:13.749220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.439 [2024-11-15 12:48:13.749240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.439 [2024-11-15 12:48:13.749253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.439 [2024-11-15 12:48:13.749283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.439 qpair failed and we were unable to recover it. 00:26:33.439 [2024-11-15 12:48:13.759093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.439 [2024-11-15 12:48:13.759185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.439 [2024-11-15 12:48:13.759211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.439 [2024-11-15 12:48:13.759226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.439 [2024-11-15 12:48:13.759238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.439 [2024-11-15 12:48:13.759268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.439 qpair failed and we were unable to recover it. 
00:26:33.439 [2024-11-15 12:48:13.769119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.439 [2024-11-15 12:48:13.769201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.439 [2024-11-15 12:48:13.769227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.439 [2024-11-15 12:48:13.769242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.439 [2024-11-15 12:48:13.769254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.439 [2024-11-15 12:48:13.769284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.439 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-15 12:48:13.779186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.698 [2024-11-15 12:48:13.779279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.698 [2024-11-15 12:48:13.779305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.698 [2024-11-15 12:48:13.779320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.698 [2024-11-15 12:48:13.779333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.698 [2024-11-15 12:48:13.779363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.698 qpair failed and we were unable to recover it. 00:26:33.698 [2024-11-15 12:48:13.789199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.698 [2024-11-15 12:48:13.789278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.698 [2024-11-15 12:48:13.789304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.698 [2024-11-15 12:48:13.789318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.698 [2024-11-15 12:48:13.789330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.698 [2024-11-15 12:48:13.789365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.698 qpair failed and we were unable to recover it. 
00:26:33.698 [2024-11-15 12:48:13.799241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.698 [2024-11-15 12:48:13.799331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.698 [2024-11-15 12:48:13.799357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.698 [2024-11-15 12:48:13.799372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.698 [2024-11-15 12:48:13.799384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.698 [2024-11-15 12:48:13.799413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.698 qpair failed and we were unable to recover it. 00:26:33.698 [2024-11-15 12:48:13.809249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.698 [2024-11-15 12:48:13.809368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.698 [2024-11-15 12:48:13.809394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.698 [2024-11-15 12:48:13.809408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.698 [2024-11-15 12:48:13.809420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.698 [2024-11-15 12:48:13.809450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.698 qpair failed and we were unable to recover it. 00:26:33.698 [2024-11-15 12:48:13.819264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.698 [2024-11-15 12:48:13.819358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.698 [2024-11-15 12:48:13.819384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.698 [2024-11-15 12:48:13.819398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.698 [2024-11-15 12:48:13.819410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.698 [2024-11-15 12:48:13.819440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.698 qpair failed and we were unable to recover it. 
00:26:33.698 [2024-11-15 12:48:13.829308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.698 [2024-11-15 12:48:13.829394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.698 [2024-11-15 12:48:13.829419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.698 [2024-11-15 12:48:13.829434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.698 [2024-11-15 12:48:13.829447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.698 [2024-11-15 12:48:13.829476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.698 qpair failed and we were unable to recover it. 00:26:33.698 [2024-11-15 12:48:13.839311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.698 [2024-11-15 12:48:13.839394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.698 [2024-11-15 12:48:13.839419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.698 [2024-11-15 12:48:13.839433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.698 [2024-11-15 12:48:13.839446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.698 [2024-11-15 12:48:13.839476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.698 qpair failed and we were unable to recover it. 00:26:33.698 [2024-11-15 12:48:13.849511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.698 [2024-11-15 12:48:13.849635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.698 [2024-11-15 12:48:13.849661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.698 [2024-11-15 12:48:13.849676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.698 [2024-11-15 12:48:13.849688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.698 [2024-11-15 12:48:13.849725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.698 qpair failed and we were unable to recover it. 
00:26:33.698 [2024-11-15 12:48:13.859415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.698 [2024-11-15 12:48:13.859500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.698 [2024-11-15 12:48:13.859526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.698 [2024-11-15 12:48:13.859540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.698 [2024-11-15 12:48:13.859552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.698 [2024-11-15 12:48:13.859582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.698 qpair failed and we were unable to recover it. 00:26:33.698 [2024-11-15 12:48:13.869419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.698 [2024-11-15 12:48:13.869512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.698 [2024-11-15 12:48:13.869538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.698 [2024-11-15 12:48:13.869552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.698 [2024-11-15 12:48:13.869564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.698 [2024-11-15 12:48:13.869594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.698 qpair failed and we were unable to recover it. 00:26:33.698 [2024-11-15 12:48:13.879465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.698 [2024-11-15 12:48:13.879596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.698 [2024-11-15 12:48:13.879628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.698 [2024-11-15 12:48:13.879644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.698 [2024-11-15 12:48:13.879656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.698 [2024-11-15 12:48:13.879687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.698 qpair failed and we were unable to recover it. 
00:26:33.698 [2024-11-15 12:48:13.889475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.698 [2024-11-15 12:48:13.889568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.698 [2024-11-15 12:48:13.889596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.698 [2024-11-15 12:48:13.889611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.698 [2024-11-15 12:48:13.889623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.699 [2024-11-15 12:48:13.889654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.699 qpair failed and we were unable to recover it. 00:26:33.699 [2024-11-15 12:48:13.899508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.699 [2024-11-15 12:48:13.899598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.699 [2024-11-15 12:48:13.899625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.699 [2024-11-15 12:48:13.899640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.699 [2024-11-15 12:48:13.899652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.699 [2024-11-15 12:48:13.899682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.699 qpair failed and we were unable to recover it. 00:26:33.699 [2024-11-15 12:48:13.909515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.699 [2024-11-15 12:48:13.909601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.699 [2024-11-15 12:48:13.909627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.699 [2024-11-15 12:48:13.909642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.699 [2024-11-15 12:48:13.909654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.699 [2024-11-15 12:48:13.909684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.699 qpair failed and we were unable to recover it. 
00:26:33.699 [2024-11-15 12:48:13.919532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.699 [2024-11-15 12:48:13.919613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.699 [2024-11-15 12:48:13.919639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.699 [2024-11-15 12:48:13.919653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.699 [2024-11-15 12:48:13.919671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.699 [2024-11-15 12:48:13.919702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.699 qpair failed and we were unable to recover it. 00:26:33.699 [2024-11-15 12:48:13.929548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.699 [2024-11-15 12:48:13.929630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.699 [2024-11-15 12:48:13.929655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.699 [2024-11-15 12:48:13.929670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.699 [2024-11-15 12:48:13.929682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.699 [2024-11-15 12:48:13.929712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.699 qpair failed and we were unable to recover it. 00:26:33.699 [2024-11-15 12:48:13.939604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.699 [2024-11-15 12:48:13.939695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.699 [2024-11-15 12:48:13.939729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.699 [2024-11-15 12:48:13.939745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.699 [2024-11-15 12:48:13.939758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.699 [2024-11-15 12:48:13.939788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.699 qpair failed and we were unable to recover it. 
00:26:33.699 [2024-11-15 12:48:13.949664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.699 [2024-11-15 12:48:13.949751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.699 [2024-11-15 12:48:13.949778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.699 [2024-11-15 12:48:13.949792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.699 [2024-11-15 12:48:13.949804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.699 [2024-11-15 12:48:13.949835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.699 qpair failed and we were unable to recover it. 00:26:33.699 [2024-11-15 12:48:13.959637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.699 [2024-11-15 12:48:13.959730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.699 [2024-11-15 12:48:13.959756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.699 [2024-11-15 12:48:13.959770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.699 [2024-11-15 12:48:13.959782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.699 [2024-11-15 12:48:13.959813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.699 qpair failed and we were unable to recover it. 00:26:33.699 [2024-11-15 12:48:13.969703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.699 [2024-11-15 12:48:13.969810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.699 [2024-11-15 12:48:13.969837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.699 [2024-11-15 12:48:13.969851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.699 [2024-11-15 12:48:13.969864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.699 [2024-11-15 12:48:13.969906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.699 qpair failed and we were unable to recover it. 
00:26:33.699 [2024-11-15 12:48:13.979754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.699 [2024-11-15 12:48:13.979845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.699 [2024-11-15 12:48:13.979872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.699 [2024-11-15 12:48:13.979886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.699 [2024-11-15 12:48:13.979898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.699 [2024-11-15 12:48:13.979928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.699 qpair failed and we were unable to recover it. 00:26:33.699 [2024-11-15 12:48:13.989859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.699 [2024-11-15 12:48:13.989996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.699 [2024-11-15 12:48:13.990022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.699 [2024-11-15 12:48:13.990037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.699 [2024-11-15 12:48:13.990049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.699 [2024-11-15 12:48:13.990079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.699 qpair failed and we were unable to recover it. 00:26:33.699 [2024-11-15 12:48:13.999778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.699 [2024-11-15 12:48:13.999859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.699 [2024-11-15 12:48:13.999885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.699 [2024-11-15 12:48:13.999900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.699 [2024-11-15 12:48:13.999912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.699 [2024-11-15 12:48:13.999942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.699 qpair failed and we were unable to recover it. 
00:26:33.699 [2024-11-15 12:48:14.009802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.699 [2024-11-15 12:48:14.009892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.699 [2024-11-15 12:48:14.009923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.700 [2024-11-15 12:48:14.009940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.700 [2024-11-15 12:48:14.009952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.700 [2024-11-15 12:48:14.009983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.700 qpair failed and we were unable to recover it. 00:26:33.700 [2024-11-15 12:48:14.019872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.700 [2024-11-15 12:48:14.019991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.700 [2024-11-15 12:48:14.020017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.700 [2024-11-15 12:48:14.020032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.700 [2024-11-15 12:48:14.020045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.700 [2024-11-15 12:48:14.020074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.700 qpair failed and we were unable to recover it. 00:26:33.700 [2024-11-15 12:48:14.029884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.700 [2024-11-15 12:48:14.029972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.700 [2024-11-15 12:48:14.029999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.700 [2024-11-15 12:48:14.030014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.700 [2024-11-15 12:48:14.030026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.700 [2024-11-15 12:48:14.030069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.700 qpair failed and we were unable to recover it. 
00:26:33.959 [2024-11-15 12:48:14.039888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.959 [2024-11-15 12:48:14.039978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.959 [2024-11-15 12:48:14.040004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.959 [2024-11-15 12:48:14.040018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.959 [2024-11-15 12:48:14.040031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.959 [2024-11-15 12:48:14.040060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.959 qpair failed and we were unable to recover it. 00:26:33.959 [2024-11-15 12:48:14.050036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.959 [2024-11-15 12:48:14.050117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.959 [2024-11-15 12:48:14.050143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.959 [2024-11-15 12:48:14.050158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.959 [2024-11-15 12:48:14.050177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.959 [2024-11-15 12:48:14.050207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.959 qpair failed and we were unable to recover it. 00:26:33.959 [2024-11-15 12:48:14.060036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.959 [2024-11-15 12:48:14.060143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.959 [2024-11-15 12:48:14.060172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.959 [2024-11-15 12:48:14.060187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.959 [2024-11-15 12:48:14.060200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.959 [2024-11-15 12:48:14.060230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.959 qpair failed and we were unable to recover it. 
00:26:33.959 [2024-11-15 12:48:14.069978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.959 [2024-11-15 12:48:14.070064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.959 [2024-11-15 12:48:14.070089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.959 [2024-11-15 12:48:14.070103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.959 [2024-11-15 12:48:14.070115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.959 [2024-11-15 12:48:14.070145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.959 qpair failed and we were unable to recover it. 00:26:33.959 [2024-11-15 12:48:14.080031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.959 [2024-11-15 12:48:14.080110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.959 [2024-11-15 12:48:14.080135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.959 [2024-11-15 12:48:14.080150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.959 [2024-11-15 12:48:14.080162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.959 [2024-11-15 12:48:14.080192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.959 qpair failed and we were unable to recover it. 00:26:33.959 [2024-11-15 12:48:14.090038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.959 [2024-11-15 12:48:14.090120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.959 [2024-11-15 12:48:14.090146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.959 [2024-11-15 12:48:14.090161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.959 [2024-11-15 12:48:14.090173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.959 [2024-11-15 12:48:14.090204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.959 qpair failed and we were unable to recover it. 
00:26:33.959 [2024-11-15 12:48:14.100151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.959 [2024-11-15 12:48:14.100244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.959 [2024-11-15 12:48:14.100269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.959 [2024-11-15 12:48:14.100284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.959 [2024-11-15 12:48:14.100297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.959 [2024-11-15 12:48:14.100327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.959 qpair failed and we were unable to recover it. 00:26:33.959 [2024-11-15 12:48:14.110183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.959 [2024-11-15 12:48:14.110270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.959 [2024-11-15 12:48:14.110295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.959 [2024-11-15 12:48:14.110310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.959 [2024-11-15 12:48:14.110322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.959 [2024-11-15 12:48:14.110352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.959 qpair failed and we were unable to recover it. 00:26:33.959 [2024-11-15 12:48:14.120154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.959 [2024-11-15 12:48:14.120237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.959 [2024-11-15 12:48:14.120262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.959 [2024-11-15 12:48:14.120276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.959 [2024-11-15 12:48:14.120289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.959 [2024-11-15 12:48:14.120319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.959 qpair failed and we were unable to recover it. 
00:26:33.959 [2024-11-15 12:48:14.130124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.959 [2024-11-15 12:48:14.130218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.959 [2024-11-15 12:48:14.130250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.959 [2024-11-15 12:48:14.130269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.959 [2024-11-15 12:48:14.130282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.959 [2024-11-15 12:48:14.130314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.959 qpair failed and we were unable to recover it. 00:26:33.960 [2024-11-15 12:48:14.140248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.960 [2024-11-15 12:48:14.140378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.960 [2024-11-15 12:48:14.140410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.960 [2024-11-15 12:48:14.140426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.960 [2024-11-15 12:48:14.140438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.960 [2024-11-15 12:48:14.140469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.960 qpair failed and we were unable to recover it. 00:26:33.960 [2024-11-15 12:48:14.150258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.960 [2024-11-15 12:48:14.150349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.960 [2024-11-15 12:48:14.150376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.960 [2024-11-15 12:48:14.150391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.960 [2024-11-15 12:48:14.150403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.960 [2024-11-15 12:48:14.150433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.960 qpair failed and we were unable to recover it. 
00:26:33.960 [2024-11-15 12:48:14.160262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.960 [2024-11-15 12:48:14.160346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.960 [2024-11-15 12:48:14.160372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.960 [2024-11-15 12:48:14.160387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.960 [2024-11-15 12:48:14.160400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.960 [2024-11-15 12:48:14.160430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.960 qpair failed and we were unable to recover it. 00:26:33.960 [2024-11-15 12:48:14.170288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.960 [2024-11-15 12:48:14.170371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.960 [2024-11-15 12:48:14.170397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.960 [2024-11-15 12:48:14.170412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.960 [2024-11-15 12:48:14.170424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.960 [2024-11-15 12:48:14.170454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.960 qpair failed and we were unable to recover it. 00:26:33.960 [2024-11-15 12:48:14.180301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.960 [2024-11-15 12:48:14.180388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.960 [2024-11-15 12:48:14.180414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.960 [2024-11-15 12:48:14.180437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.960 [2024-11-15 12:48:14.180450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.960 [2024-11-15 12:48:14.180482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.960 qpair failed and we were unable to recover it. 
00:26:33.960 [2024-11-15 12:48:14.190297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.960 [2024-11-15 12:48:14.190417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.960 [2024-11-15 12:48:14.190443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.960 [2024-11-15 12:48:14.190458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.960 [2024-11-15 12:48:14.190471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.960 [2024-11-15 12:48:14.190501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.960 qpair failed and we were unable to recover it. 00:26:33.960 [2024-11-15 12:48:14.200344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.960 [2024-11-15 12:48:14.200425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.960 [2024-11-15 12:48:14.200450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.960 [2024-11-15 12:48:14.200464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.960 [2024-11-15 12:48:14.200476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.960 [2024-11-15 12:48:14.200507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.960 qpair failed and we were unable to recover it. 00:26:33.960 [2024-11-15 12:48:14.210361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.960 [2024-11-15 12:48:14.210443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.960 [2024-11-15 12:48:14.210470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.960 [2024-11-15 12:48:14.210484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.960 [2024-11-15 12:48:14.210496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.960 [2024-11-15 12:48:14.210526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.960 qpair failed and we were unable to recover it. 
00:26:33.960 [2024-11-15 12:48:14.220426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.960 [2024-11-15 12:48:14.220518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.960 [2024-11-15 12:48:14.220544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.960 [2024-11-15 12:48:14.220559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.960 [2024-11-15 12:48:14.220571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.960 [2024-11-15 12:48:14.220606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.960 qpair failed and we were unable to recover it. 00:26:33.960 [2024-11-15 12:48:14.230421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.960 [2024-11-15 12:48:14.230505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.960 [2024-11-15 12:48:14.230531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.960 [2024-11-15 12:48:14.230546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.960 [2024-11-15 12:48:14.230558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.960 [2024-11-15 12:48:14.230588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.960 qpair failed and we were unable to recover it. 00:26:33.960 [2024-11-15 12:48:14.240444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.960 [2024-11-15 12:48:14.240575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.960 [2024-11-15 12:48:14.240601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.960 [2024-11-15 12:48:14.240615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.960 [2024-11-15 12:48:14.240627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.960 [2024-11-15 12:48:14.240657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.960 qpair failed and we were unable to recover it. 
00:26:33.960 [2024-11-15 12:48:14.250485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.960 [2024-11-15 12:48:14.250570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.961 [2024-11-15 12:48:14.250596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.961 [2024-11-15 12:48:14.250610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.961 [2024-11-15 12:48:14.250622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.961 [2024-11-15 12:48:14.250652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.961 qpair failed and we were unable to recover it. 00:26:33.961 [2024-11-15 12:48:14.260525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.961 [2024-11-15 12:48:14.260645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.961 [2024-11-15 12:48:14.260671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.961 [2024-11-15 12:48:14.260685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.961 [2024-11-15 12:48:14.260698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.961 [2024-11-15 12:48:14.260737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.961 qpair failed and we were unable to recover it. 00:26:33.961 [2024-11-15 12:48:14.270570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.961 [2024-11-15 12:48:14.270659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.961 [2024-11-15 12:48:14.270686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.961 [2024-11-15 12:48:14.270701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.961 [2024-11-15 12:48:14.270713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.961 [2024-11-15 12:48:14.270752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.961 qpair failed and we were unable to recover it. 
00:26:33.961 [2024-11-15 12:48:14.280564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.961 [2024-11-15 12:48:14.280650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.961 [2024-11-15 12:48:14.280676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.961 [2024-11-15 12:48:14.280690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.961 [2024-11-15 12:48:14.280702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.961 [2024-11-15 12:48:14.280740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.961 qpair failed and we were unable to recover it. 00:26:33.961 [2024-11-15 12:48:14.290610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.961 [2024-11-15 12:48:14.290731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.961 [2024-11-15 12:48:14.290758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.961 [2024-11-15 12:48:14.290772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.961 [2024-11-15 12:48:14.290785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.961 [2024-11-15 12:48:14.290815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.961 qpair failed and we were unable to recover it. 00:26:33.961 [2024-11-15 12:48:14.300623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.961 [2024-11-15 12:48:14.300714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.961 [2024-11-15 12:48:14.300746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.961 [2024-11-15 12:48:14.300761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.961 [2024-11-15 12:48:14.300773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:33.961 [2024-11-15 12:48:14.300803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.961 qpair failed and we were unable to recover it. 
00:26:34.220 [2024-11-15 12:48:14.310653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.220 [2024-11-15 12:48:14.310778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.220 [2024-11-15 12:48:14.310804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.220 [2024-11-15 12:48:14.310825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.220 [2024-11-15 12:48:14.310838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.220 [2024-11-15 12:48:14.310868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.220 qpair failed and we were unable to recover it. 00:26:34.220 [2024-11-15 12:48:14.320690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.220 [2024-11-15 12:48:14.320787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.220 [2024-11-15 12:48:14.320813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.220 [2024-11-15 12:48:14.320827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.220 [2024-11-15 12:48:14.320839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.220 [2024-11-15 12:48:14.320869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.220 qpair failed and we were unable to recover it. 00:26:34.220 [2024-11-15 12:48:14.330805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.220 [2024-11-15 12:48:14.330890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.220 [2024-11-15 12:48:14.330916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.220 [2024-11-15 12:48:14.330930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.220 [2024-11-15 12:48:14.330942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.220 [2024-11-15 12:48:14.330972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.220 qpair failed and we were unable to recover it. 
00:26:34.220 [2024-11-15 12:48:14.340754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.220 [2024-11-15 12:48:14.340846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.220 [2024-11-15 12:48:14.340872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.220 [2024-11-15 12:48:14.340886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.220 [2024-11-15 12:48:14.340898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.220 [2024-11-15 12:48:14.340929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.220 qpair failed and we were unable to recover it. 00:26:34.220 [2024-11-15 12:48:14.350797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.220 [2024-11-15 12:48:14.350888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.220 [2024-11-15 12:48:14.350914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.220 [2024-11-15 12:48:14.350929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.220 [2024-11-15 12:48:14.350941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.220 [2024-11-15 12:48:14.350977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.220 qpair failed and we were unable to recover it. 00:26:34.220 [2024-11-15 12:48:14.360778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.220 [2024-11-15 12:48:14.360865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.220 [2024-11-15 12:48:14.360892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.220 [2024-11-15 12:48:14.360907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.220 [2024-11-15 12:48:14.360919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.220 [2024-11-15 12:48:14.360949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.220 qpair failed and we were unable to recover it. 
00:26:34.220 [2024-11-15 12:48:14.370916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.221 [2024-11-15 12:48:14.371003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.221 [2024-11-15 12:48:14.371029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.221 [2024-11-15 12:48:14.371043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.221 [2024-11-15 12:48:14.371056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.221 [2024-11-15 12:48:14.371085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.221 qpair failed and we were unable to recover it. 00:26:34.221 [2024-11-15 12:48:14.380889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.221 [2024-11-15 12:48:14.380984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.221 [2024-11-15 12:48:14.381017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.221 [2024-11-15 12:48:14.381035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.221 [2024-11-15 12:48:14.381048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.221 [2024-11-15 12:48:14.381081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.221 qpair failed and we were unable to recover it. 00:26:34.221 [2024-11-15 12:48:14.390906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.221 [2024-11-15 12:48:14.390997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.221 [2024-11-15 12:48:14.391024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.221 [2024-11-15 12:48:14.391038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.221 [2024-11-15 12:48:14.391051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.221 [2024-11-15 12:48:14.391082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.221 qpair failed and we were unable to recover it. 
00:26:34.221 [2024-11-15 12:48:14.400973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.221 [2024-11-15 12:48:14.401093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.221 [2024-11-15 12:48:14.401119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.221 [2024-11-15 12:48:14.401133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.221 [2024-11-15 12:48:14.401145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.221 [2024-11-15 12:48:14.401176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.221 qpair failed and we were unable to recover it. 00:26:34.221 [2024-11-15 12:48:14.410946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.221 [2024-11-15 12:48:14.411040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.221 [2024-11-15 12:48:14.411066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.221 [2024-11-15 12:48:14.411081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.221 [2024-11-15 12:48:14.411093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.221 [2024-11-15 12:48:14.411123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.221 qpair failed and we were unable to recover it. 00:26:34.221 [2024-11-15 12:48:14.421066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.221 [2024-11-15 12:48:14.421210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.221 [2024-11-15 12:48:14.421235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.221 [2024-11-15 12:48:14.421248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.221 [2024-11-15 12:48:14.421260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.221 [2024-11-15 12:48:14.421290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.221 qpair failed and we were unable to recover it. 
00:26:34.221 [2024-11-15 12:48:14.431035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.221 [2024-11-15 12:48:14.431123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.221 [2024-11-15 12:48:14.431149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.221 [2024-11-15 12:48:14.431163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.221 [2024-11-15 12:48:14.431175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.221 [2024-11-15 12:48:14.431205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.221 qpair failed and we were unable to recover it. 00:26:34.221 [2024-11-15 12:48:14.441133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.221 [2024-11-15 12:48:14.441266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.221 [2024-11-15 12:48:14.441300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.221 [2024-11-15 12:48:14.441316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.221 [2024-11-15 12:48:14.441329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.221 [2024-11-15 12:48:14.441359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.221 qpair failed and we were unable to recover it. 00:26:34.221 [2024-11-15 12:48:14.451109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.221 [2024-11-15 12:48:14.451200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.221 [2024-11-15 12:48:14.451225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.221 [2024-11-15 12:48:14.451240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.221 [2024-11-15 12:48:14.451252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.221 [2024-11-15 12:48:14.451282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.221 qpair failed and we were unable to recover it. 
00:26:34.221 [2024-11-15 12:48:14.461072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.221 [2024-11-15 12:48:14.461164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.221 [2024-11-15 12:48:14.461190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.221 [2024-11-15 12:48:14.461204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.221 [2024-11-15 12:48:14.461217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.221 [2024-11-15 12:48:14.461247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.221 qpair failed and we were unable to recover it. 00:26:34.221 [2024-11-15 12:48:14.471134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.221 [2024-11-15 12:48:14.471218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.221 [2024-11-15 12:48:14.471243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.221 [2024-11-15 12:48:14.471258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.221 [2024-11-15 12:48:14.471270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.221 [2024-11-15 12:48:14.471300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.221 qpair failed and we were unable to recover it. 00:26:34.221 [2024-11-15 12:48:14.481182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.221 [2024-11-15 12:48:14.481272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.221 [2024-11-15 12:48:14.481301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.221 [2024-11-15 12:48:14.481318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.221 [2024-11-15 12:48:14.481338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.221 [2024-11-15 12:48:14.481370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.221 qpair failed and we were unable to recover it. 
00:26:34.221 [2024-11-15 12:48:14.491166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.221 [2024-11-15 12:48:14.491265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.221 [2024-11-15 12:48:14.491292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.221 [2024-11-15 12:48:14.491306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.221 [2024-11-15 12:48:14.491318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.221 [2024-11-15 12:48:14.491348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.221 qpair failed and we were unable to recover it. 00:26:34.221 [2024-11-15 12:48:14.501219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.221 [2024-11-15 12:48:14.501309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.222 [2024-11-15 12:48:14.501334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.222 [2024-11-15 12:48:14.501349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.222 [2024-11-15 12:48:14.501361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.222 [2024-11-15 12:48:14.501391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.222 qpair failed and we were unable to recover it. 00:26:34.222 [2024-11-15 12:48:14.511206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.222 [2024-11-15 12:48:14.511296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.222 [2024-11-15 12:48:14.511321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.222 [2024-11-15 12:48:14.511336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.222 [2024-11-15 12:48:14.511349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.222 [2024-11-15 12:48:14.511379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.222 qpair failed and we were unable to recover it. 
00:26:34.222 [2024-11-15 12:48:14.521237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.222 [2024-11-15 12:48:14.521328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.222 [2024-11-15 12:48:14.521353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.222 [2024-11-15 12:48:14.521368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.222 [2024-11-15 12:48:14.521380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.222 [2024-11-15 12:48:14.521410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.222 qpair failed and we were unable to recover it. 00:26:34.222 [2024-11-15 12:48:14.531297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.222 [2024-11-15 12:48:14.531425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.222 [2024-11-15 12:48:14.531451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.222 [2024-11-15 12:48:14.531465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.222 [2024-11-15 12:48:14.531477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.222 [2024-11-15 12:48:14.531506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.222 qpair failed and we were unable to recover it. 00:26:34.222 [2024-11-15 12:48:14.541351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.222 [2024-11-15 12:48:14.541445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.222 [2024-11-15 12:48:14.541470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.222 [2024-11-15 12:48:14.541485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.222 [2024-11-15 12:48:14.541498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.222 [2024-11-15 12:48:14.541528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.222 qpair failed and we were unable to recover it. 
00:26:34.222 [2024-11-15 12:48:14.551334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.222 [2024-11-15 12:48:14.551432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.222 [2024-11-15 12:48:14.551458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.222 [2024-11-15 12:48:14.551472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.222 [2024-11-15 12:48:14.551484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.222 [2024-11-15 12:48:14.551514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.222 qpair failed and we were unable to recover it. 00:26:34.222 [2024-11-15 12:48:14.561348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.222 [2024-11-15 12:48:14.561432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.222 [2024-11-15 12:48:14.561458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.222 [2024-11-15 12:48:14.561472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.222 [2024-11-15 12:48:14.561484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.222 [2024-11-15 12:48:14.561514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.222 qpair failed and we were unable to recover it. 00:26:34.480 [2024-11-15 12:48:14.571367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.571448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.571479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.571494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.571507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.571537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 
00:26:34.480 [2024-11-15 12:48:14.581427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.581514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.581539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.581554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.581566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.581596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 00:26:34.480 [2024-11-15 12:48:14.591481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.591599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.591625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.591639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.591651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.591681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 00:26:34.480 [2024-11-15 12:48:14.601487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.601571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.601597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.601614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.601628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.601658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 
00:26:34.480 [2024-11-15 12:48:14.611497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.611583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.611609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.611623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.611641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.611672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 00:26:34.480 [2024-11-15 12:48:14.621581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.621677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.621714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.621737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.621750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.621787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 00:26:34.480 [2024-11-15 12:48:14.631597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.631697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.631740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.631763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.631776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.631808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 
00:26:34.480 [2024-11-15 12:48:14.641600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.641682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.641710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.641736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.641750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.641781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 00:26:34.480 [2024-11-15 12:48:14.651639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.651762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.651790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.651804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.651817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.651848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 00:26:34.480 [2024-11-15 12:48:14.661672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.661775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.661813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.661832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.661845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.661876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 
00:26:34.480 [2024-11-15 12:48:14.671678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.671818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.671847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.671862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.671875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.671905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 00:26:34.480 [2024-11-15 12:48:14.681747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.681848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.681875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.681889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.681902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.681932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 00:26:34.480 [2024-11-15 12:48:14.691790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.691891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.691917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.691931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.691943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.691974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 
00:26:34.480 [2024-11-15 12:48:14.701879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.701971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.702011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.702026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.702038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.702068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 00:26:34.480 [2024-11-15 12:48:14.711784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.711869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.711895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.711909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.711923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.711953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 00:26:34.480 [2024-11-15 12:48:14.721829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.721914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.721940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.721954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.480 [2024-11-15 12:48:14.721966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.480 [2024-11-15 12:48:14.721996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.480 qpair failed and we were unable to recover it. 
00:26:34.480 [2024-11-15 12:48:14.731900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.480 [2024-11-15 12:48:14.731982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.480 [2024-11-15 12:48:14.732008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.480 [2024-11-15 12:48:14.732022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.481 [2024-11-15 12:48:14.732034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.481 [2024-11-15 12:48:14.732065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.481 qpair failed and we were unable to recover it. 00:26:34.481 [2024-11-15 12:48:14.741895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.481 [2024-11-15 12:48:14.741983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.481 [2024-11-15 12:48:14.742008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.481 [2024-11-15 12:48:14.742028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.481 [2024-11-15 12:48:14.742042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.481 [2024-11-15 12:48:14.742071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.481 qpair failed and we were unable to recover it. 00:26:34.481 [2024-11-15 12:48:14.751906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.481 [2024-11-15 12:48:14.751990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.481 [2024-11-15 12:48:14.752015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.481 [2024-11-15 12:48:14.752030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.481 [2024-11-15 12:48:14.752042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.481 [2024-11-15 12:48:14.752072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.481 qpair failed and we were unable to recover it. 
00:26:34.481 [2024-11-15 12:48:14.761962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.481 [2024-11-15 12:48:14.762049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.481 [2024-11-15 12:48:14.762075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.481 [2024-11-15 12:48:14.762091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.481 [2024-11-15 12:48:14.762104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.481 [2024-11-15 12:48:14.762146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.481 qpair failed and we were unable to recover it. 00:26:34.481 [2024-11-15 12:48:14.772005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.481 [2024-11-15 12:48:14.772097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.481 [2024-11-15 12:48:14.772123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.481 [2024-11-15 12:48:14.772137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.481 [2024-11-15 12:48:14.772150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.481 [2024-11-15 12:48:14.772180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.481 qpair failed and we were unable to recover it. 00:26:34.481 [2024-11-15 12:48:14.782053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.481 [2024-11-15 12:48:14.782145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.481 [2024-11-15 12:48:14.782170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.481 [2024-11-15 12:48:14.782184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.481 [2024-11-15 12:48:14.782197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.481 [2024-11-15 12:48:14.782232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.481 qpair failed and we were unable to recover it. 
00:26:34.481 [2024-11-15 12:48:14.792015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.481 [2024-11-15 12:48:14.792106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.481 [2024-11-15 12:48:14.792132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.481 [2024-11-15 12:48:14.792147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.481 [2024-11-15 12:48:14.792159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.481 [2024-11-15 12:48:14.792190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.481 qpair failed and we were unable to recover it. 00:26:34.481 [2024-11-15 12:48:14.802071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.481 [2024-11-15 12:48:14.802156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.481 [2024-11-15 12:48:14.802182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.481 [2024-11-15 12:48:14.802196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.481 [2024-11-15 12:48:14.802208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.481 [2024-11-15 12:48:14.802238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.481 qpair failed and we were unable to recover it. 00:26:34.481 [2024-11-15 12:48:14.812075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.481 [2024-11-15 12:48:14.812157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.481 [2024-11-15 12:48:14.812183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.481 [2024-11-15 12:48:14.812197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.481 [2024-11-15 12:48:14.812210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.481 [2024-11-15 12:48:14.812239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.481 qpair failed and we were unable to recover it. 
00:26:34.481 [2024-11-15 12:48:14.822099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.481 [2024-11-15 12:48:14.822186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.481 [2024-11-15 12:48:14.822211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.481 [2024-11-15 12:48:14.822226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.481 [2024-11-15 12:48:14.822238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.481 [2024-11-15 12:48:14.822268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.481 qpair failed and we were unable to recover it. 00:26:34.739 [2024-11-15 12:48:14.832217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.739 [2024-11-15 12:48:14.832347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.739 [2024-11-15 12:48:14.832375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.739 [2024-11-15 12:48:14.832391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.739 [2024-11-15 12:48:14.832403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.739 [2024-11-15 12:48:14.832433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.739 qpair failed and we were unable to recover it. 00:26:34.739 [2024-11-15 12:48:14.842201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.739 [2024-11-15 12:48:14.842289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.739 [2024-11-15 12:48:14.842316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.739 [2024-11-15 12:48:14.842330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.739 [2024-11-15 12:48:14.842342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.739 [2024-11-15 12:48:14.842372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.739 qpair failed and we were unable to recover it. 
00:26:34.739 [2024-11-15 12:48:14.852163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.739 [2024-11-15 12:48:14.852257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.739 [2024-11-15 12:48:14.852283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.739 [2024-11-15 12:48:14.852297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.739 [2024-11-15 12:48:14.852310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.739 [2024-11-15 12:48:14.852340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.739 qpair failed and we were unable to recover it. 00:26:34.739 [2024-11-15 12:48:14.862273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.739 [2024-11-15 12:48:14.862398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.739 [2024-11-15 12:48:14.862426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.739 [2024-11-15 12:48:14.862441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.739 [2024-11-15 12:48:14.862453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.739 [2024-11-15 12:48:14.862483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.739 qpair failed and we were unable to recover it. 00:26:34.739 [2024-11-15 12:48:14.872257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.739 [2024-11-15 12:48:14.872344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.739 [2024-11-15 12:48:14.872371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.739 [2024-11-15 12:48:14.872391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.739 [2024-11-15 12:48:14.872404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.739 [2024-11-15 12:48:14.872434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.739 qpair failed and we were unable to recover it. 
00:26:34.739 [2024-11-15 12:48:14.882319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.739 [2024-11-15 12:48:14.882417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.739 [2024-11-15 12:48:14.882449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.739 [2024-11-15 12:48:14.882467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.739 [2024-11-15 12:48:14.882480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.739 [2024-11-15 12:48:14.882511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.739 qpair failed and we were unable to recover it. 00:26:34.739 [2024-11-15 12:48:14.892309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.739 [2024-11-15 12:48:14.892398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.739 [2024-11-15 12:48:14.892426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.739 [2024-11-15 12:48:14.892440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.739 [2024-11-15 12:48:14.892453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.739 [2024-11-15 12:48:14.892483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.740 qpair failed and we were unable to recover it. 00:26:34.740 [2024-11-15 12:48:14.902346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.740 [2024-11-15 12:48:14.902437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.740 [2024-11-15 12:48:14.902463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.740 [2024-11-15 12:48:14.902478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.740 [2024-11-15 12:48:14.902490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.740 [2024-11-15 12:48:14.902520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.740 qpair failed and we were unable to recover it. 
00:26:34.740 [2024-11-15 12:48:14.912398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.740 [2024-11-15 12:48:14.912503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.740 [2024-11-15 12:48:14.912530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.740 [2024-11-15 12:48:14.912544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.740 [2024-11-15 12:48:14.912557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.740 [2024-11-15 12:48:14.912592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.740 qpair failed and we were unable to recover it. 00:26:34.740 [2024-11-15 12:48:14.922368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.740 [2024-11-15 12:48:14.922480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.740 [2024-11-15 12:48:14.922506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.740 [2024-11-15 12:48:14.922521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.740 [2024-11-15 12:48:14.922533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.740 [2024-11-15 12:48:14.922562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.740 qpair failed and we were unable to recover it. 00:26:34.740 [2024-11-15 12:48:14.932421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.740 [2024-11-15 12:48:14.932502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.740 [2024-11-15 12:48:14.932531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.740 [2024-11-15 12:48:14.932547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.740 [2024-11-15 12:48:14.932560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.740 [2024-11-15 12:48:14.932590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.740 qpair failed and we were unable to recover it. 
00:26:34.740 [2024-11-15 12:48:14.942437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.740 [2024-11-15 12:48:14.942528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.740 [2024-11-15 12:48:14.942555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.740 [2024-11-15 12:48:14.942569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.740 [2024-11-15 12:48:14.942582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.740 [2024-11-15 12:48:14.942612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.740 qpair failed and we were unable to recover it. 00:26:34.740 [2024-11-15 12:48:14.952504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.740 [2024-11-15 12:48:14.952594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.740 [2024-11-15 12:48:14.952620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.740 [2024-11-15 12:48:14.952635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.740 [2024-11-15 12:48:14.952647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.740 [2024-11-15 12:48:14.952677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.740 qpair failed and we were unable to recover it. 00:26:34.740 [2024-11-15 12:48:14.962482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.740 [2024-11-15 12:48:14.962567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.740 [2024-11-15 12:48:14.962593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.740 [2024-11-15 12:48:14.962608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.740 [2024-11-15 12:48:14.962620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.740 [2024-11-15 12:48:14.962650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.740 qpair failed and we were unable to recover it. 
00:26:34.740 [2024-11-15 12:48:14.972548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.740 [2024-11-15 12:48:14.972632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.740 [2024-11-15 12:48:14.972657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.740 [2024-11-15 12:48:14.972671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.740 [2024-11-15 12:48:14.972684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.740 [2024-11-15 12:48:14.972713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.740 qpair failed and we were unable to recover it. 00:26:34.740 [2024-11-15 12:48:14.982648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.740 [2024-11-15 12:48:14.982745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.740 [2024-11-15 12:48:14.982771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.740 [2024-11-15 12:48:14.982786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.740 [2024-11-15 12:48:14.982798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.740 [2024-11-15 12:48:14.982829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.740 qpair failed and we were unable to recover it. 00:26:34.740 [2024-11-15 12:48:14.992611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.740 [2024-11-15 12:48:14.992748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.740 [2024-11-15 12:48:14.992774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.740 [2024-11-15 12:48:14.992789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.740 [2024-11-15 12:48:14.992801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.740 [2024-11-15 12:48:14.992832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.740 qpair failed and we were unable to recover it. 
00:26:34.740 [2024-11-15 12:48:15.002613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.740 [2024-11-15 12:48:15.002698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.740 [2024-11-15 12:48:15.002739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.740 [2024-11-15 12:48:15.002757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.740 [2024-11-15 12:48:15.002769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.740 [2024-11-15 12:48:15.002801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.740 qpair failed and we were unable to recover it. 00:26:34.740 [2024-11-15 12:48:15.012643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.740 [2024-11-15 12:48:15.012743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.740 [2024-11-15 12:48:15.012770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.740 [2024-11-15 12:48:15.012784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.740 [2024-11-15 12:48:15.012797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.740 [2024-11-15 12:48:15.012827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.740 qpair failed and we were unable to recover it. 00:26:34.740 [2024-11-15 12:48:15.022696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.740 [2024-11-15 12:48:15.022817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.740 [2024-11-15 12:48:15.022844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.740 [2024-11-15 12:48:15.022858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.740 [2024-11-15 12:48:15.022871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.740 [2024-11-15 12:48:15.022901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.740 qpair failed and we were unable to recover it. 
00:26:34.741 [2024-11-15 12:48:15.032728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.741 [2024-11-15 12:48:15.032816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.741 [2024-11-15 12:48:15.032843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.741 [2024-11-15 12:48:15.032857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.741 [2024-11-15 12:48:15.032869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.741 [2024-11-15 12:48:15.032899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.741 qpair failed and we were unable to recover it. 00:26:34.741 [2024-11-15 12:48:15.042770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.741 [2024-11-15 12:48:15.042857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.741 [2024-11-15 12:48:15.042884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.741 [2024-11-15 12:48:15.042899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.741 [2024-11-15 12:48:15.042916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.741 [2024-11-15 12:48:15.042948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.741 qpair failed and we were unable to recover it. 00:26:34.741 [2024-11-15 12:48:15.052774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.741 [2024-11-15 12:48:15.052898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.741 [2024-11-15 12:48:15.052924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.741 [2024-11-15 12:48:15.052939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.741 [2024-11-15 12:48:15.052951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.741 [2024-11-15 12:48:15.052981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.741 qpair failed and we were unable to recover it. 
00:26:34.741 [2024-11-15 12:48:15.062830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.741 [2024-11-15 12:48:15.062943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.741 [2024-11-15 12:48:15.062968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.741 [2024-11-15 12:48:15.062983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.741 [2024-11-15 12:48:15.062996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.741 [2024-11-15 12:48:15.063026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.741 qpair failed and we were unable to recover it. 00:26:34.741 [2024-11-15 12:48:15.072875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.741 [2024-11-15 12:48:15.072959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.741 [2024-11-15 12:48:15.072985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.741 [2024-11-15 12:48:15.072999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.741 [2024-11-15 12:48:15.073012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:34.741 [2024-11-15 12:48:15.073042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.741 qpair failed and we were unable to recover it. 00:26:35.000 [2024-11-15 12:48:15.082864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.000 [2024-11-15 12:48:15.082950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.000 [2024-11-15 12:48:15.082976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.000 [2024-11-15 12:48:15.082990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.000 [2024-11-15 12:48:15.083003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.000 [2024-11-15 12:48:15.083032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.000 qpair failed and we were unable to recover it. 
00:26:35.000 [2024-11-15 12:48:15.092946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.000 [2024-11-15 12:48:15.093037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.000 [2024-11-15 12:48:15.093064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.000 [2024-11-15 12:48:15.093078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.000 [2024-11-15 12:48:15.093091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.000 [2024-11-15 12:48:15.093121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.000 qpair failed and we were unable to recover it. 00:26:35.000 [2024-11-15 12:48:15.102921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.000 [2024-11-15 12:48:15.103012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.000 [2024-11-15 12:48:15.103037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.000 [2024-11-15 12:48:15.103051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.000 [2024-11-15 12:48:15.103064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.000 [2024-11-15 12:48:15.103095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.000 qpair failed and we were unable to recover it. 00:26:35.000 [2024-11-15 12:48:15.113025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.000 [2024-11-15 12:48:15.113124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.000 [2024-11-15 12:48:15.113150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.000 [2024-11-15 12:48:15.113165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.000 [2024-11-15 12:48:15.113177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.000 [2024-11-15 12:48:15.113207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.000 qpair failed and we were unable to recover it. 
00:26:35.000 [2024-11-15 12:48:15.122997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.000 [2024-11-15 12:48:15.123089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.000 [2024-11-15 12:48:15.123115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.000 [2024-11-15 12:48:15.123129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.000 [2024-11-15 12:48:15.123141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.000 [2024-11-15 12:48:15.123172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.000 qpair failed and we were unable to recover it. 00:26:35.000 [2024-11-15 12:48:15.133034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.000 [2024-11-15 12:48:15.133133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.000 [2024-11-15 12:48:15.133171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.000 [2024-11-15 12:48:15.133189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.000 [2024-11-15 12:48:15.133201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.000 [2024-11-15 12:48:15.133233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.000 qpair failed and we were unable to recover it. 00:26:35.000 [2024-11-15 12:48:15.143039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.000 [2024-11-15 12:48:15.143133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.000 [2024-11-15 12:48:15.143160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.000 [2024-11-15 12:48:15.143175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.000 [2024-11-15 12:48:15.143187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.000 [2024-11-15 12:48:15.143218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.000 qpair failed and we were unable to recover it. 
00:26:35.000 [2024-11-15 12:48:15.153074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.000 [2024-11-15 12:48:15.153163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.000 [2024-11-15 12:48:15.153190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.000 [2024-11-15 12:48:15.153205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.000 [2024-11-15 12:48:15.153217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.000 [2024-11-15 12:48:15.153248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.000 qpair failed and we were unable to recover it. 00:26:35.000 [2024-11-15 12:48:15.163087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.000 [2024-11-15 12:48:15.163173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.000 [2024-11-15 12:48:15.163199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.000 [2024-11-15 12:48:15.163213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.000 [2024-11-15 12:48:15.163225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.000 [2024-11-15 12:48:15.163256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.000 qpair failed and we were unable to recover it. 00:26:35.000 [2024-11-15 12:48:15.173097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.000 [2024-11-15 12:48:15.173181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.000 [2024-11-15 12:48:15.173207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.000 [2024-11-15 12:48:15.173223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.000 [2024-11-15 12:48:15.173241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.000 [2024-11-15 12:48:15.173273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.000 qpair failed and we were unable to recover it. 
00:26:35.000 [2024-11-15 12:48:15.183184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.000 [2024-11-15 12:48:15.183277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.000 [2024-11-15 12:48:15.183307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.001 [2024-11-15 12:48:15.183323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.001 [2024-11-15 12:48:15.183335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.001 [2024-11-15 12:48:15.183366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.001 qpair failed and we were unable to recover it. 00:26:35.001 [2024-11-15 12:48:15.193190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.001 [2024-11-15 12:48:15.193279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.001 [2024-11-15 12:48:15.193306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.001 [2024-11-15 12:48:15.193320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.001 [2024-11-15 12:48:15.193332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.001 [2024-11-15 12:48:15.193363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.001 qpair failed and we were unable to recover it. 00:26:35.001 [2024-11-15 12:48:15.203194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.001 [2024-11-15 12:48:15.203286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.001 [2024-11-15 12:48:15.203312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.001 [2024-11-15 12:48:15.203327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.001 [2024-11-15 12:48:15.203339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.001 [2024-11-15 12:48:15.203368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.001 qpair failed and we were unable to recover it. 
00:26:35.001 [2024-11-15 12:48:15.213195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.001 [2024-11-15 12:48:15.213276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.001 [2024-11-15 12:48:15.213301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.001 [2024-11-15 12:48:15.213316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.001 [2024-11-15 12:48:15.213328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.001 [2024-11-15 12:48:15.213358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.001 qpair failed and we were unable to recover it. 00:26:35.001 [2024-11-15 12:48:15.223310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.001 [2024-11-15 12:48:15.223407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.001 [2024-11-15 12:48:15.223433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.001 [2024-11-15 12:48:15.223448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.001 [2024-11-15 12:48:15.223460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.001 [2024-11-15 12:48:15.223490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.001 qpair failed and we were unable to recover it. 00:26:35.001 [2024-11-15 12:48:15.233262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.001 [2024-11-15 12:48:15.233348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.001 [2024-11-15 12:48:15.233374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.001 [2024-11-15 12:48:15.233389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.001 [2024-11-15 12:48:15.233402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.001 [2024-11-15 12:48:15.233432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.001 qpair failed and we were unable to recover it. 
00:26:35.001 [2024-11-15 12:48:15.243315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.001 [2024-11-15 12:48:15.243403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.001 [2024-11-15 12:48:15.243432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.001 [2024-11-15 12:48:15.243448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.001 [2024-11-15 12:48:15.243461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.001 [2024-11-15 12:48:15.243491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.001 qpair failed and we were unable to recover it. 00:26:35.001 [2024-11-15 12:48:15.253366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.001 [2024-11-15 12:48:15.253454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.001 [2024-11-15 12:48:15.253481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.001 [2024-11-15 12:48:15.253495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.001 [2024-11-15 12:48:15.253508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.001 [2024-11-15 12:48:15.253538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.001 qpair failed and we were unable to recover it. 00:26:35.001 [2024-11-15 12:48:15.263388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.001 [2024-11-15 12:48:15.263504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.001 [2024-11-15 12:48:15.263531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.001 [2024-11-15 12:48:15.263545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.001 [2024-11-15 12:48:15.263557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.001 [2024-11-15 12:48:15.263588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.001 qpair failed and we were unable to recover it. 
00:26:35.001 [2024-11-15 12:48:15.273399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.001 [2024-11-15 12:48:15.273485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.001 [2024-11-15 12:48:15.273510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.001 [2024-11-15 12:48:15.273526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.001 [2024-11-15 12:48:15.273538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.001 [2024-11-15 12:48:15.273567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.001 qpair failed and we were unable to recover it. 00:26:35.001 [2024-11-15 12:48:15.283435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.001 [2024-11-15 12:48:15.283517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.001 [2024-11-15 12:48:15.283543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.001 [2024-11-15 12:48:15.283558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.001 [2024-11-15 12:48:15.283570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.001 [2024-11-15 12:48:15.283599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.001 qpair failed and we were unable to recover it. 00:26:35.001 [2024-11-15 12:48:15.293457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.001 [2024-11-15 12:48:15.293541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.001 [2024-11-15 12:48:15.293567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.001 [2024-11-15 12:48:15.293582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.001 [2024-11-15 12:48:15.293594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.001 [2024-11-15 12:48:15.293624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.001 qpair failed and we were unable to recover it. 
00:26:35.001 [2024-11-15 12:48:15.303503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.002 [2024-11-15 12:48:15.303604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.002 [2024-11-15 12:48:15.303630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.002 [2024-11-15 12:48:15.303650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.002 [2024-11-15 12:48:15.303662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.002 [2024-11-15 12:48:15.303692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.002 qpair failed and we were unable to recover it. 00:26:35.002 [2024-11-15 12:48:15.313484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.002 [2024-11-15 12:48:15.313570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.002 [2024-11-15 12:48:15.313596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.002 [2024-11-15 12:48:15.313611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.002 [2024-11-15 12:48:15.313623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.002 [2024-11-15 12:48:15.313652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.002 qpair failed and we were unable to recover it. 00:26:35.002 [2024-11-15 12:48:15.323535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.002 [2024-11-15 12:48:15.323653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.002 [2024-11-15 12:48:15.323678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.002 [2024-11-15 12:48:15.323693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.002 [2024-11-15 12:48:15.323705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.002 [2024-11-15 12:48:15.323742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.002 qpair failed and we were unable to recover it. 
00:26:35.002 [2024-11-15 12:48:15.333541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.002 [2024-11-15 12:48:15.333625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.002 [2024-11-15 12:48:15.333650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.002 [2024-11-15 12:48:15.333665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.002 [2024-11-15 12:48:15.333677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.002 [2024-11-15 12:48:15.333707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.002 qpair failed and we were unable to recover it. 00:26:35.261 [2024-11-15 12:48:15.343582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.261 [2024-11-15 12:48:15.343673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.261 [2024-11-15 12:48:15.343699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.261 [2024-11-15 12:48:15.343713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.261 [2024-11-15 12:48:15.343735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.261 [2024-11-15 12:48:15.343772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.261 qpair failed and we were unable to recover it. 00:26:35.261 [2024-11-15 12:48:15.353605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.261 [2024-11-15 12:48:15.353737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.261 [2024-11-15 12:48:15.353763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.261 [2024-11-15 12:48:15.353777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.261 [2024-11-15 12:48:15.353790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.261 [2024-11-15 12:48:15.353820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.261 qpair failed and we were unable to recover it. 
00:26:35.261 [2024-11-15 12:48:15.363642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.261 [2024-11-15 12:48:15.363732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.261 [2024-11-15 12:48:15.363758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.261 [2024-11-15 12:48:15.363773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.261 [2024-11-15 12:48:15.363785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.261 [2024-11-15 12:48:15.363815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.261 qpair failed and we were unable to recover it. 00:26:35.261 [2024-11-15 12:48:15.373659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.261 [2024-11-15 12:48:15.373748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.261 [2024-11-15 12:48:15.373773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.261 [2024-11-15 12:48:15.373788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.261 [2024-11-15 12:48:15.373799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.261 [2024-11-15 12:48:15.373830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.261 qpair failed and we were unable to recover it. 00:26:35.261 [2024-11-15 12:48:15.383695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.261 [2024-11-15 12:48:15.383820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.261 [2024-11-15 12:48:15.383849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.261 [2024-11-15 12:48:15.383864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.261 [2024-11-15 12:48:15.383877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.261 [2024-11-15 12:48:15.383911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.261 qpair failed and we were unable to recover it. 
00:26:35.261 [2024-11-15 12:48:15.393768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.261 [2024-11-15 12:48:15.393881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.261 [2024-11-15 12:48:15.393908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.261 [2024-11-15 12:48:15.393923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.261 [2024-11-15 12:48:15.393936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.261 [2024-11-15 12:48:15.393967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.261 qpair failed and we were unable to recover it. 00:26:35.261 [2024-11-15 12:48:15.403766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.261 [2024-11-15 12:48:15.403862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.261 [2024-11-15 12:48:15.403889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.261 [2024-11-15 12:48:15.403903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.261 [2024-11-15 12:48:15.403915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.261 [2024-11-15 12:48:15.403946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.261 qpair failed and we were unable to recover it. 00:26:35.261 [2024-11-15 12:48:15.413789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.261 [2024-11-15 12:48:15.413898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.261 [2024-11-15 12:48:15.413924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.261 [2024-11-15 12:48:15.413938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.261 [2024-11-15 12:48:15.413951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.261 [2024-11-15 12:48:15.413981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.261 qpair failed and we were unable to recover it. 
00:26:35.261 [2024-11-15 12:48:15.423835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.261 [2024-11-15 12:48:15.423922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.261 [2024-11-15 12:48:15.423948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.261 [2024-11-15 12:48:15.423961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.261 [2024-11-15 12:48:15.423974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.261 [2024-11-15 12:48:15.424004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.261 qpair failed and we were unable to recover it. 00:26:35.261 [2024-11-15 12:48:15.433847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.261 [2024-11-15 12:48:15.433957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.261 [2024-11-15 12:48:15.433984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.261 [2024-11-15 12:48:15.434004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.261 [2024-11-15 12:48:15.434017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.261 [2024-11-15 12:48:15.434047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.261 qpair failed and we were unable to recover it. 00:26:35.261 [2024-11-15 12:48:15.443857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.261 [2024-11-15 12:48:15.443955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.261 [2024-11-15 12:48:15.443981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.261 [2024-11-15 12:48:15.443995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.261 [2024-11-15 12:48:15.444008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.262 [2024-11-15 12:48:15.444038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.262 qpair failed and we were unable to recover it. 
00:26:35.262 [2024-11-15 12:48:15.453925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.262 [2024-11-15 12:48:15.454046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.262 [2024-11-15 12:48:15.454072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.262 [2024-11-15 12:48:15.454086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.262 [2024-11-15 12:48:15.454098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.262 [2024-11-15 12:48:15.454128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.262 qpair failed and we were unable to recover it. 00:26:35.262 [2024-11-15 12:48:15.463921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.262 [2024-11-15 12:48:15.464009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.262 [2024-11-15 12:48:15.464034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.262 [2024-11-15 12:48:15.464049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.262 [2024-11-15 12:48:15.464061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.262 [2024-11-15 12:48:15.464091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.262 qpair failed and we were unable to recover it. 00:26:35.262 [2024-11-15 12:48:15.473946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.262 [2024-11-15 12:48:15.474035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.262 [2024-11-15 12:48:15.474061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.262 [2024-11-15 12:48:15.474075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.262 [2024-11-15 12:48:15.474087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.262 [2024-11-15 12:48:15.474123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.262 qpair failed and we were unable to recover it. 
00:26:35.262 [2024-11-15 12:48:15.483986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.262 [2024-11-15 12:48:15.484071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.262 [2024-11-15 12:48:15.484096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.262 [2024-11-15 12:48:15.484110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.262 [2024-11-15 12:48:15.484123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.262 [2024-11-15 12:48:15.484153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.262 qpair failed and we were unable to recover it. 00:26:35.262 [2024-11-15 12:48:15.494063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.262 [2024-11-15 12:48:15.494147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.262 [2024-11-15 12:48:15.494173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.262 [2024-11-15 12:48:15.494187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.262 [2024-11-15 12:48:15.494199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.262 [2024-11-15 12:48:15.494229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.262 qpair failed and we were unable to recover it. 00:26:35.262 [2024-11-15 12:48:15.504048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.262 [2024-11-15 12:48:15.504135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.262 [2024-11-15 12:48:15.504161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.262 [2024-11-15 12:48:15.504175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.262 [2024-11-15 12:48:15.504187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.262 [2024-11-15 12:48:15.504216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.262 qpair failed and we were unable to recover it. 
00:26:35.262 [2024-11-15 12:48:15.514081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.262 [2024-11-15 12:48:15.514167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.262 [2024-11-15 12:48:15.514192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.262 [2024-11-15 12:48:15.514206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.262 [2024-11-15 12:48:15.514218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.262 [2024-11-15 12:48:15.514247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.262 qpair failed and we were unable to recover it. 00:26:35.262 [2024-11-15 12:48:15.524104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.262 [2024-11-15 12:48:15.524186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.262 [2024-11-15 12:48:15.524212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.262 [2024-11-15 12:48:15.524227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.262 [2024-11-15 12:48:15.524239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.262 [2024-11-15 12:48:15.524269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.262 qpair failed and we were unable to recover it. 00:26:35.262 [2024-11-15 12:48:15.534166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.262 [2024-11-15 12:48:15.534259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.262 [2024-11-15 12:48:15.534285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.262 [2024-11-15 12:48:15.534300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.262 [2024-11-15 12:48:15.534312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.262 [2024-11-15 12:48:15.534341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.262 qpair failed and we were unable to recover it. 
00:26:35.262 [2024-11-15 12:48:15.544226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.262 [2024-11-15 12:48:15.544320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.262 [2024-11-15 12:48:15.544346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.262 [2024-11-15 12:48:15.544360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.262 [2024-11-15 12:48:15.544372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.262 [2024-11-15 12:48:15.544402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.262 qpair failed and we were unable to recover it. 00:26:35.262 [2024-11-15 12:48:15.554218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.262 [2024-11-15 12:48:15.554305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.262 [2024-11-15 12:48:15.554331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.262 [2024-11-15 12:48:15.554345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.262 [2024-11-15 12:48:15.554358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.262 [2024-11-15 12:48:15.554388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.262 qpair failed and we were unable to recover it. 00:26:35.262 [2024-11-15 12:48:15.564229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.262 [2024-11-15 12:48:15.564313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.262 [2024-11-15 12:48:15.564348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.262 [2024-11-15 12:48:15.564365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.262 [2024-11-15 12:48:15.564378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.262 [2024-11-15 12:48:15.564409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.262 qpair failed and we were unable to recover it. 
00:26:35.262 [2024-11-15 12:48:15.574250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.262 [2024-11-15 12:48:15.574329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.262 [2024-11-15 12:48:15.574356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.262 [2024-11-15 12:48:15.574370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.262 [2024-11-15 12:48:15.574382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.262 [2024-11-15 12:48:15.574412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.262 qpair failed and we were unable to recover it. 00:26:35.263 [2024-11-15 12:48:15.584382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.263 [2024-11-15 12:48:15.584487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.263 [2024-11-15 12:48:15.584513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.263 [2024-11-15 12:48:15.584528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.263 [2024-11-15 12:48:15.584540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.263 [2024-11-15 12:48:15.584570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.263 qpair failed and we were unable to recover it. 00:26:35.263 [2024-11-15 12:48:15.594344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.263 [2024-11-15 12:48:15.594430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.263 [2024-11-15 12:48:15.594456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.263 [2024-11-15 12:48:15.594470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.263 [2024-11-15 12:48:15.594482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.263 [2024-11-15 12:48:15.594512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.263 qpair failed and we were unable to recover it. 
00:26:35.521 [2024-11-15 12:48:15.604386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.521 [2024-11-15 12:48:15.604466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.521 [2024-11-15 12:48:15.604492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.521 [2024-11-15 12:48:15.604506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.521 [2024-11-15 12:48:15.604524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.521 [2024-11-15 12:48:15.604556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.521 qpair failed and we were unable to recover it. 00:26:35.521 [2024-11-15 12:48:15.614428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.521 [2024-11-15 12:48:15.614511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.521 [2024-11-15 12:48:15.614540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.521 [2024-11-15 12:48:15.614556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.521 [2024-11-15 12:48:15.614568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.521 [2024-11-15 12:48:15.614598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.521 qpair failed and we were unable to recover it. 00:26:35.521 [2024-11-15 12:48:15.624369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.521 [2024-11-15 12:48:15.624460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.521 [2024-11-15 12:48:15.624487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.521 [2024-11-15 12:48:15.624501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.521 [2024-11-15 12:48:15.624513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.521 [2024-11-15 12:48:15.624543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.521 qpair failed and we were unable to recover it. 
00:26:35.521 [2024-11-15 12:48:15.634411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.521 [2024-11-15 12:48:15.634499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.522 [2024-11-15 12:48:15.634534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.522 [2024-11-15 12:48:15.634554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.522 [2024-11-15 12:48:15.634567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.522 [2024-11-15 12:48:15.634599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.522 qpair failed and we were unable to recover it. 00:26:35.522 [2024-11-15 12:48:15.644488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.522 [2024-11-15 12:48:15.644601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.522 [2024-11-15 12:48:15.644628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.522 [2024-11-15 12:48:15.644643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.522 [2024-11-15 12:48:15.644656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.522 [2024-11-15 12:48:15.644686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.522 qpair failed and we were unable to recover it. 00:26:35.522 [2024-11-15 12:48:15.654448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.522 [2024-11-15 12:48:15.654540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.522 [2024-11-15 12:48:15.654567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.522 [2024-11-15 12:48:15.654582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.522 [2024-11-15 12:48:15.654594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.522 [2024-11-15 12:48:15.654624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.522 qpair failed and we were unable to recover it. 
00:26:35.522 [2024-11-15 12:48:15.664522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.522 [2024-11-15 12:48:15.664638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.522 [2024-11-15 12:48:15.664664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.522 [2024-11-15 12:48:15.664679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.522 [2024-11-15 12:48:15.664691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.522 [2024-11-15 12:48:15.664727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.522 qpair failed and we were unable to recover it. 00:26:35.522 [2024-11-15 12:48:15.674513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.522 [2024-11-15 12:48:15.674601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.522 [2024-11-15 12:48:15.674627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.522 [2024-11-15 12:48:15.674642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.522 [2024-11-15 12:48:15.674654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.522 [2024-11-15 12:48:15.674683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.522 qpair failed and we were unable to recover it. 00:26:35.522 [2024-11-15 12:48:15.684587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.522 [2024-11-15 12:48:15.684674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.522 [2024-11-15 12:48:15.684700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.522 [2024-11-15 12:48:15.684714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.522 [2024-11-15 12:48:15.684736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.522 [2024-11-15 12:48:15.684766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.522 qpair failed and we were unable to recover it. 
00:26:35.522 [2024-11-15 12:48:15.694633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.522 [2024-11-15 12:48:15.694736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.522 [2024-11-15 12:48:15.694767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.522 [2024-11-15 12:48:15.694783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.522 [2024-11-15 12:48:15.694795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.522 [2024-11-15 12:48:15.694825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.522 qpair failed and we were unable to recover it. 00:26:35.522 [2024-11-15 12:48:15.704606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.522 [2024-11-15 12:48:15.704698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.522 [2024-11-15 12:48:15.704732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.522 [2024-11-15 12:48:15.704748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.522 [2024-11-15 12:48:15.704760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.522 [2024-11-15 12:48:15.704792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.522 qpair failed and we were unable to recover it. 00:26:35.522 [2024-11-15 12:48:15.714653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.522 [2024-11-15 12:48:15.714740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.522 [2024-11-15 12:48:15.714767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.522 [2024-11-15 12:48:15.714782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.522 [2024-11-15 12:48:15.714795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.522 [2024-11-15 12:48:15.714838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.522 qpair failed and we were unable to recover it. 
00:26:35.522 [2024-11-15 12:48:15.724644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.522 [2024-11-15 12:48:15.724774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.522 [2024-11-15 12:48:15.724801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.522 [2024-11-15 12:48:15.724815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.522 [2024-11-15 12:48:15.724828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.522 [2024-11-15 12:48:15.724858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.522 qpair failed and we were unable to recover it. 00:26:35.522 [2024-11-15 12:48:15.734687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.522 [2024-11-15 12:48:15.734793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.522 [2024-11-15 12:48:15.734820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.522 [2024-11-15 12:48:15.734834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.522 [2024-11-15 12:48:15.734852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.522 [2024-11-15 12:48:15.734884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.522 qpair failed and we were unable to recover it. 00:26:35.522 [2024-11-15 12:48:15.744792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.522 [2024-11-15 12:48:15.744930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.522 [2024-11-15 12:48:15.744955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.522 [2024-11-15 12:48:15.744970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.522 [2024-11-15 12:48:15.744982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.522 [2024-11-15 12:48:15.745012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.522 qpair failed and we were unable to recover it. 
00:26:35.522 [2024-11-15 12:48:15.754765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.523 [2024-11-15 12:48:15.754885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.523 [2024-11-15 12:48:15.754911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.523 [2024-11-15 12:48:15.754925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.523 [2024-11-15 12:48:15.754937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.523 [2024-11-15 12:48:15.754967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.523 qpair failed and we were unable to recover it. 00:26:35.523 [2024-11-15 12:48:15.764794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.523 [2024-11-15 12:48:15.764879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.523 [2024-11-15 12:48:15.764905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.523 [2024-11-15 12:48:15.764920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.523 [2024-11-15 12:48:15.764932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.523 [2024-11-15 12:48:15.764962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.523 qpair failed and we were unable to recover it. 00:26:35.523 [2024-11-15 12:48:15.774783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.523 [2024-11-15 12:48:15.774889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.523 [2024-11-15 12:48:15.774914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.523 [2024-11-15 12:48:15.774929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.523 [2024-11-15 12:48:15.774941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.523 [2024-11-15 12:48:15.774971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.523 qpair failed and we were unable to recover it. 
00:26:35.523 [2024-11-15 12:48:15.784965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.523 [2024-11-15 12:48:15.785056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.523 [2024-11-15 12:48:15.785081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.523 [2024-11-15 12:48:15.785095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.523 [2024-11-15 12:48:15.785107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.523 [2024-11-15 12:48:15.785138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.523 qpair failed and we were unable to recover it. 00:26:35.523 [2024-11-15 12:48:15.794872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.523 [2024-11-15 12:48:15.794960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.523 [2024-11-15 12:48:15.794985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.523 [2024-11-15 12:48:15.794999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.523 [2024-11-15 12:48:15.795011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.523 [2024-11-15 12:48:15.795041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.523 qpair failed and we were unable to recover it. 00:26:35.523 [2024-11-15 12:48:15.804879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.523 [2024-11-15 12:48:15.804980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.523 [2024-11-15 12:48:15.805006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.523 [2024-11-15 12:48:15.805020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.523 [2024-11-15 12:48:15.805032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.523 [2024-11-15 12:48:15.805063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.523 qpair failed and we were unable to recover it. 
00:26:35.523 [2024-11-15 12:48:15.814899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.523 [2024-11-15 12:48:15.814982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.523 [2024-11-15 12:48:15.815008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.523 [2024-11-15 12:48:15.815022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.523 [2024-11-15 12:48:15.815035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.523 [2024-11-15 12:48:15.815064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.523 qpair failed and we were unable to recover it. 00:26:35.523 [2024-11-15 12:48:15.824980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.523 [2024-11-15 12:48:15.825089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.523 [2024-11-15 12:48:15.825115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.523 [2024-11-15 12:48:15.825129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.523 [2024-11-15 12:48:15.825141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.523 [2024-11-15 12:48:15.825171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.523 qpair failed and we were unable to recover it. 00:26:35.523 [2024-11-15 12:48:15.834971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.523 [2024-11-15 12:48:15.835058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.523 [2024-11-15 12:48:15.835084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.523 [2024-11-15 12:48:15.835099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.523 [2024-11-15 12:48:15.835111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.523 [2024-11-15 12:48:15.835141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.523 qpair failed and we were unable to recover it. 
00:26:35.523 [2024-11-15 12:48:15.845030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.523 [2024-11-15 12:48:15.845121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.523 [2024-11-15 12:48:15.845150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.523 [2024-11-15 12:48:15.845167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.523 [2024-11-15 12:48:15.845179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.523 [2024-11-15 12:48:15.845211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.523 qpair failed and we were unable to recover it. 00:26:35.523 [2024-11-15 12:48:15.855074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.523 [2024-11-15 12:48:15.855203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.523 [2024-11-15 12:48:15.855230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.523 [2024-11-15 12:48:15.855245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.523 [2024-11-15 12:48:15.855257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.523 [2024-11-15 12:48:15.855287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.523 qpair failed and we were unable to recover it. 00:26:35.783 [2024-11-15 12:48:15.865095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.783 [2024-11-15 12:48:15.865189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.783 [2024-11-15 12:48:15.865215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.783 [2024-11-15 12:48:15.865235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.783 [2024-11-15 12:48:15.865248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.783 [2024-11-15 12:48:15.865278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.783 qpair failed and we were unable to recover it. 
00:26:35.783 [2024-11-15 12:48:15.875118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.783 [2024-11-15 12:48:15.875239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.783 [2024-11-15 12:48:15.875265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.783 [2024-11-15 12:48:15.875279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.783 [2024-11-15 12:48:15.875292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.783 [2024-11-15 12:48:15.875323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.783 qpair failed and we were unable to recover it. 00:26:35.783 [2024-11-15 12:48:15.885208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.783 [2024-11-15 12:48:15.885295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.783 [2024-11-15 12:48:15.885325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.783 [2024-11-15 12:48:15.885347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.783 [2024-11-15 12:48:15.885360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.783 [2024-11-15 12:48:15.885392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.783 qpair failed and we were unable to recover it. 00:26:35.783 [2024-11-15 12:48:15.895218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.783 [2024-11-15 12:48:15.895306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.783 [2024-11-15 12:48:15.895333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.783 [2024-11-15 12:48:15.895348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.783 [2024-11-15 12:48:15.895360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.783 [2024-11-15 12:48:15.895391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.783 qpair failed and we were unable to recover it. 
00:26:35.783 [2024-11-15 12:48:15.905239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.783 [2024-11-15 12:48:15.905333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.783 [2024-11-15 12:48:15.905362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.783 [2024-11-15 12:48:15.905378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.783 [2024-11-15 12:48:15.905390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.783 [2024-11-15 12:48:15.905427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.783 qpair failed and we were unable to recover it. 00:26:35.783 [2024-11-15 12:48:15.915315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.783 [2024-11-15 12:48:15.915409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.783 [2024-11-15 12:48:15.915435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.783 [2024-11-15 12:48:15.915450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.783 [2024-11-15 12:48:15.915463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.783 [2024-11-15 12:48:15.915494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.783 qpair failed and we were unable to recover it. 00:26:35.783 [2024-11-15 12:48:15.925261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.783 [2024-11-15 12:48:15.925345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.783 [2024-11-15 12:48:15.925373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.783 [2024-11-15 12:48:15.925390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.783 [2024-11-15 12:48:15.925403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.783 [2024-11-15 12:48:15.925434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.783 qpair failed and we were unable to recover it. 
00:26:35.783 [2024-11-15 12:48:15.935310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.783 [2024-11-15 12:48:15.935398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.783 [2024-11-15 12:48:15.935425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.783 [2024-11-15 12:48:15.935439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.783 [2024-11-15 12:48:15.935451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.783 [2024-11-15 12:48:15.935482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.783 qpair failed and we were unable to recover it. 00:26:35.783 [2024-11-15 12:48:15.945346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.783 [2024-11-15 12:48:15.945433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.783 [2024-11-15 12:48:15.945459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.783 [2024-11-15 12:48:15.945473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.783 [2024-11-15 12:48:15.945485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.783 [2024-11-15 12:48:15.945515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.783 qpair failed and we were unable to recover it. 00:26:35.783 [2024-11-15 12:48:15.955360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.783 [2024-11-15 12:48:15.955475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.783 [2024-11-15 12:48:15.955501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.783 [2024-11-15 12:48:15.955516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.783 [2024-11-15 12:48:15.955528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.783 [2024-11-15 12:48:15.955558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.783 qpair failed and we were unable to recover it. 
00:26:35.783 [2024-11-15 12:48:15.965348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.783 [2024-11-15 12:48:15.965441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.783 [2024-11-15 12:48:15.965467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.783 [2024-11-15 12:48:15.965482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.783 [2024-11-15 12:48:15.965494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.783 [2024-11-15 12:48:15.965524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.783 qpair failed and we were unable to recover it. 00:26:35.783 [2024-11-15 12:48:15.975401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.783 [2024-11-15 12:48:15.975530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.783 [2024-11-15 12:48:15.975556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.783 [2024-11-15 12:48:15.975571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.783 [2024-11-15 12:48:15.975583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.783 [2024-11-15 12:48:15.975614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.783 qpair failed and we were unable to recover it. 00:26:35.784 [2024-11-15 12:48:15.985418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.784 [2024-11-15 12:48:15.985506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.784 [2024-11-15 12:48:15.985532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.784 [2024-11-15 12:48:15.985547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.784 [2024-11-15 12:48:15.985559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.784 [2024-11-15 12:48:15.985588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.784 qpair failed and we were unable to recover it. 
00:26:35.784 [2024-11-15 12:48:15.995567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.784 [2024-11-15 12:48:15.995654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.784 [2024-11-15 12:48:15.995685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.784 [2024-11-15 12:48:15.995700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.784 [2024-11-15 12:48:15.995713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.784 [2024-11-15 12:48:15.995751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.784 qpair failed and we were unable to recover it. 00:26:35.784 [2024-11-15 12:48:16.005486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.784 [2024-11-15 12:48:16.005570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.784 [2024-11-15 12:48:16.005596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.784 [2024-11-15 12:48:16.005610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.784 [2024-11-15 12:48:16.005623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.784 [2024-11-15 12:48:16.005653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.784 qpair failed and we were unable to recover it. 00:26:35.784 [2024-11-15 12:48:16.015515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.784 [2024-11-15 12:48:16.015602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.784 [2024-11-15 12:48:16.015628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.784 [2024-11-15 12:48:16.015642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.784 [2024-11-15 12:48:16.015654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.784 [2024-11-15 12:48:16.015685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.784 qpair failed and we were unable to recover it. 
00:26:35.784 [2024-11-15 12:48:16.025552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.784 [2024-11-15 12:48:16.025654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.784 [2024-11-15 12:48:16.025680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.784 [2024-11-15 12:48:16.025694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.784 [2024-11-15 12:48:16.025707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.784 [2024-11-15 12:48:16.025745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.784 qpair failed and we were unable to recover it. 00:26:35.784 [2024-11-15 12:48:16.035588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.784 [2024-11-15 12:48:16.035724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.784 [2024-11-15 12:48:16.035751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.784 [2024-11-15 12:48:16.035765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.784 [2024-11-15 12:48:16.035777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.784 [2024-11-15 12:48:16.035817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.784 qpair failed and we were unable to recover it. 00:26:35.784 [2024-11-15 12:48:16.045601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.784 [2024-11-15 12:48:16.045714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.784 [2024-11-15 12:48:16.045748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.784 [2024-11-15 12:48:16.045762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.784 [2024-11-15 12:48:16.045775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.784 [2024-11-15 12:48:16.045805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.784 qpair failed and we were unable to recover it. 
00:26:35.784 [2024-11-15 12:48:16.055623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.784 [2024-11-15 12:48:16.055703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.784 [2024-11-15 12:48:16.055735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.784 [2024-11-15 12:48:16.055751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.784 [2024-11-15 12:48:16.055763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.784 [2024-11-15 12:48:16.055793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.784 qpair failed and we were unable to recover it. 00:26:35.784 [2024-11-15 12:48:16.065777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.784 [2024-11-15 12:48:16.065869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.784 [2024-11-15 12:48:16.065895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.784 [2024-11-15 12:48:16.065909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.784 [2024-11-15 12:48:16.065922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.784 [2024-11-15 12:48:16.065952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.784 qpair failed and we were unable to recover it. 00:26:35.784 [2024-11-15 12:48:16.075657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.784 [2024-11-15 12:48:16.075753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.784 [2024-11-15 12:48:16.075779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.784 [2024-11-15 12:48:16.075793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.784 [2024-11-15 12:48:16.075805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.784 [2024-11-15 12:48:16.075836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.784 qpair failed and we were unable to recover it. 
00:26:35.784 [2024-11-15 12:48:16.085744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.784 [2024-11-15 12:48:16.085841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.784 [2024-11-15 12:48:16.085866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.784 [2024-11-15 12:48:16.085880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.784 [2024-11-15 12:48:16.085893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.784 [2024-11-15 12:48:16.085923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.784 qpair failed and we were unable to recover it. 00:26:35.784 [2024-11-15 12:48:16.095759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.784 [2024-11-15 12:48:16.095872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.784 [2024-11-15 12:48:16.095898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.784 [2024-11-15 12:48:16.095912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.784 [2024-11-15 12:48:16.095925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.784 [2024-11-15 12:48:16.095954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.784 qpair failed and we were unable to recover it. 00:26:35.784 [2024-11-15 12:48:16.105803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.784 [2024-11-15 12:48:16.105941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.784 [2024-11-15 12:48:16.105967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.784 [2024-11-15 12:48:16.105982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.784 [2024-11-15 12:48:16.105995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.785 [2024-11-15 12:48:16.106026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.785 qpair failed and we were unable to recover it. 
00:26:35.785 [2024-11-15 12:48:16.115836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.785 [2024-11-15 12:48:16.115929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.785 [2024-11-15 12:48:16.115955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.785 [2024-11-15 12:48:16.115970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.785 [2024-11-15 12:48:16.115983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:35.785 [2024-11-15 12:48:16.116021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:35.785 qpair failed and we were unable to recover it. 00:26:36.044 [2024-11-15 12:48:16.125848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.044 [2024-11-15 12:48:16.125941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.044 [2024-11-15 12:48:16.125972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.044 [2024-11-15 12:48:16.125987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.044 [2024-11-15 12:48:16.126000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.044 [2024-11-15 12:48:16.126029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.044 qpair failed and we were unable to recover it. 00:26:36.045 [2024-11-15 12:48:16.135896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.045 [2024-11-15 12:48:16.136014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.045 [2024-11-15 12:48:16.136050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.045 [2024-11-15 12:48:16.136065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.045 [2024-11-15 12:48:16.136078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.045 [2024-11-15 12:48:16.136109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.045 qpair failed and we were unable to recover it. 
00:26:36.045 [2024-11-15 12:48:16.145906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.045 [2024-11-15 12:48:16.145997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.045 [2024-11-15 12:48:16.146023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.045 [2024-11-15 12:48:16.146038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.045 [2024-11-15 12:48:16.146051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.045 [2024-11-15 12:48:16.146082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.045 qpair failed and we were unable to recover it. 00:26:36.045 [2024-11-15 12:48:16.156041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.045 [2024-11-15 12:48:16.156167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.045 [2024-11-15 12:48:16.156192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.045 [2024-11-15 12:48:16.156207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.045 [2024-11-15 12:48:16.156219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.045 [2024-11-15 12:48:16.156249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.045 qpair failed and we were unable to recover it. 00:26:36.045 [2024-11-15 12:48:16.165928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.045 [2024-11-15 12:48:16.166015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.045 [2024-11-15 12:48:16.166041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.045 [2024-11-15 12:48:16.166056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.045 [2024-11-15 12:48:16.166074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.045 [2024-11-15 12:48:16.166105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.045 qpair failed and we were unable to recover it. 
00:26:36.045 [2024-11-15 12:48:16.175980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.045 [2024-11-15 12:48:16.176062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.045 [2024-11-15 12:48:16.176088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.045 [2024-11-15 12:48:16.176102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.045 [2024-11-15 12:48:16.176115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.045 [2024-11-15 12:48:16.176145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.045 qpair failed and we were unable to recover it. 00:26:36.045 [2024-11-15 12:48:16.186038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.045 [2024-11-15 12:48:16.186151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.045 [2024-11-15 12:48:16.186180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.045 [2024-11-15 12:48:16.186197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.045 [2024-11-15 12:48:16.186209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.045 [2024-11-15 12:48:16.186241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.045 qpair failed and we were unable to recover it. 00:26:36.045 [2024-11-15 12:48:16.196046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.045 [2024-11-15 12:48:16.196131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.045 [2024-11-15 12:48:16.196157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.045 [2024-11-15 12:48:16.196171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.045 [2024-11-15 12:48:16.196184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.045 [2024-11-15 12:48:16.196214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.045 qpair failed and we were unable to recover it. 
00:26:36.045 [2024-11-15 12:48:16.206067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.045 [2024-11-15 12:48:16.206154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.045 [2024-11-15 12:48:16.206180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.045 [2024-11-15 12:48:16.206195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.045 [2024-11-15 12:48:16.206207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.045 [2024-11-15 12:48:16.206237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.045 qpair failed and we were unable to recover it. 00:26:36.045 [2024-11-15 12:48:16.216067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.045 [2024-11-15 12:48:16.216155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.045 [2024-11-15 12:48:16.216182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.045 [2024-11-15 12:48:16.216196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.045 [2024-11-15 12:48:16.216209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.045 [2024-11-15 12:48:16.216239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.045 qpair failed and we were unable to recover it. 00:26:36.045 [2024-11-15 12:48:16.226125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.045 [2024-11-15 12:48:16.226216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.045 [2024-11-15 12:48:16.226242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.045 [2024-11-15 12:48:16.226256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.045 [2024-11-15 12:48:16.226269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.045 [2024-11-15 12:48:16.226299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.045 qpair failed and we were unable to recover it. 
00:26:36.045 [2024-11-15 12:48:16.236164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.045 [2024-11-15 12:48:16.236246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.045 [2024-11-15 12:48:16.236272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.045 [2024-11-15 12:48:16.236286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.045 [2024-11-15 12:48:16.236299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.045 [2024-11-15 12:48:16.236328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.045 qpair failed and we were unable to recover it. 00:26:36.045 [2024-11-15 12:48:16.246160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.045 [2024-11-15 12:48:16.246252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.045 [2024-11-15 12:48:16.246277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.045 [2024-11-15 12:48:16.246292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.045 [2024-11-15 12:48:16.246304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.045 [2024-11-15 12:48:16.246334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.045 qpair failed and we were unable to recover it. 00:26:36.045 [2024-11-15 12:48:16.256184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.045 [2024-11-15 12:48:16.256260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.045 [2024-11-15 12:48:16.256291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.045 [2024-11-15 12:48:16.256306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.045 [2024-11-15 12:48:16.256318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.045 [2024-11-15 12:48:16.256348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.046 qpair failed and we were unable to recover it. 
00:26:36.046 [2024-11-15 12:48:16.266224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.046 [2024-11-15 12:48:16.266313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.046 [2024-11-15 12:48:16.266339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.046 [2024-11-15 12:48:16.266353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.046 [2024-11-15 12:48:16.266365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.046 [2024-11-15 12:48:16.266397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.046 qpair failed and we were unable to recover it. 00:26:36.046 [2024-11-15 12:48:16.276228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.046 [2024-11-15 12:48:16.276308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.046 [2024-11-15 12:48:16.276334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.046 [2024-11-15 12:48:16.276348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.046 [2024-11-15 12:48:16.276361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.046 [2024-11-15 12:48:16.276390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.046 qpair failed and we were unable to recover it. 00:26:36.046 [2024-11-15 12:48:16.286308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.046 [2024-11-15 12:48:16.286392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.046 [2024-11-15 12:48:16.286418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.046 [2024-11-15 12:48:16.286433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.046 [2024-11-15 12:48:16.286445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.046 [2024-11-15 12:48:16.286474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.046 qpair failed and we were unable to recover it. 
00:26:36.046 [2024-11-15 12:48:16.296330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.046 [2024-11-15 12:48:16.296414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.046 [2024-11-15 12:48:16.296440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.046 [2024-11-15 12:48:16.296455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.046 [2024-11-15 12:48:16.296472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.046 [2024-11-15 12:48:16.296504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.046 qpair failed and we were unable to recover it. 00:26:36.046 [2024-11-15 12:48:16.306362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.046 [2024-11-15 12:48:16.306454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.046 [2024-11-15 12:48:16.306480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.046 [2024-11-15 12:48:16.306495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.046 [2024-11-15 12:48:16.306508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.046 [2024-11-15 12:48:16.306538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.046 qpair failed and we were unable to recover it. 00:26:36.046 [2024-11-15 12:48:16.316416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.046 [2024-11-15 12:48:16.316504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.046 [2024-11-15 12:48:16.316531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.046 [2024-11-15 12:48:16.316546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.046 [2024-11-15 12:48:16.316562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.046 [2024-11-15 12:48:16.316595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.046 qpair failed and we were unable to recover it. 
00:26:36.046 [2024-11-15 12:48:16.326457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.046 [2024-11-15 12:48:16.326543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.046 [2024-11-15 12:48:16.326570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.046 [2024-11-15 12:48:16.326585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.046 [2024-11-15 12:48:16.326597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.046 [2024-11-15 12:48:16.326640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.046 qpair failed and we were unable to recover it. 00:26:36.046 [2024-11-15 12:48:16.336426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.046 [2024-11-15 12:48:16.336512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.046 [2024-11-15 12:48:16.336538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.046 [2024-11-15 12:48:16.336552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.046 [2024-11-15 12:48:16.336565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.046 [2024-11-15 12:48:16.336595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.046 qpair failed and we were unable to recover it. 00:26:36.046 [2024-11-15 12:48:16.346442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.046 [2024-11-15 12:48:16.346530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.046 [2024-11-15 12:48:16.346557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.046 [2024-11-15 12:48:16.346572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.046 [2024-11-15 12:48:16.346584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.046 [2024-11-15 12:48:16.346614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.046 qpair failed and we were unable to recover it. 
00:26:36.046 [2024-11-15 12:48:16.356471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.046 [2024-11-15 12:48:16.356555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.046 [2024-11-15 12:48:16.356581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.046 [2024-11-15 12:48:16.356595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.046 [2024-11-15 12:48:16.356607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.046 [2024-11-15 12:48:16.356638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.046 qpair failed and we were unable to recover it. 00:26:36.046 [2024-11-15 12:48:16.366508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.046 [2024-11-15 12:48:16.366593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.046 [2024-11-15 12:48:16.366619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.046 [2024-11-15 12:48:16.366634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.046 [2024-11-15 12:48:16.366646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.046 [2024-11-15 12:48:16.366677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.046 qpair failed and we were unable to recover it. 00:26:36.046 [2024-11-15 12:48:16.376543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.046 [2024-11-15 12:48:16.376638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.046 [2024-11-15 12:48:16.376664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.046 [2024-11-15 12:48:16.376679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.046 [2024-11-15 12:48:16.376691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.046 [2024-11-15 12:48:16.376728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.046 qpair failed and we were unable to recover it. 
00:26:36.305 [2024-11-15 12:48:16.386569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.305 [2024-11-15 12:48:16.386731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.305 [2024-11-15 12:48:16.386760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.305 [2024-11-15 12:48:16.386776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.305 [2024-11-15 12:48:16.386791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.305 [2024-11-15 12:48:16.386831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.305 qpair failed and we were unable to recover it. 00:26:36.305 [2024-11-15 12:48:16.396569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.305 [2024-11-15 12:48:16.396658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.305 [2024-11-15 12:48:16.396686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.305 [2024-11-15 12:48:16.396701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.305 [2024-11-15 12:48:16.396713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.305 [2024-11-15 12:48:16.396757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.305 qpair failed and we were unable to recover it. 00:26:36.305 [2024-11-15 12:48:16.406595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.305 [2024-11-15 12:48:16.406688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.305 [2024-11-15 12:48:16.406715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.305 [2024-11-15 12:48:16.406741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.305 [2024-11-15 12:48:16.406763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.305 [2024-11-15 12:48:16.406794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.305 qpair failed and we were unable to recover it. 
00:26:36.305 [2024-11-15 12:48:16.416625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.305 [2024-11-15 12:48:16.416708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.305 [2024-11-15 12:48:16.416743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.305 [2024-11-15 12:48:16.416758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.305 [2024-11-15 12:48:16.416771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.305 [2024-11-15 12:48:16.416801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.305 qpair failed and we were unable to recover it. 00:26:36.305 [2024-11-15 12:48:16.426695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.305 [2024-11-15 12:48:16.426794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.305 [2024-11-15 12:48:16.426819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.305 [2024-11-15 12:48:16.426842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.305 [2024-11-15 12:48:16.426856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.305 [2024-11-15 12:48:16.426886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.305 qpair failed and we were unable to recover it. 00:26:36.305 [2024-11-15 12:48:16.436733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.306 [2024-11-15 12:48:16.436829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.306 [2024-11-15 12:48:16.436855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.306 [2024-11-15 12:48:16.436870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.306 [2024-11-15 12:48:16.436882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.306 [2024-11-15 12:48:16.436912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.306 qpair failed and we were unable to recover it. 
00:26:36.306 [2024-11-15 12:48:16.446708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.306 [2024-11-15 12:48:16.446811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.306 [2024-11-15 12:48:16.446837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.306 [2024-11-15 12:48:16.446851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.306 [2024-11-15 12:48:16.446864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.306 [2024-11-15 12:48:16.446894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.306 qpair failed and we were unable to recover it. 00:26:36.306 [2024-11-15 12:48:16.456752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.306 [2024-11-15 12:48:16.456838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.306 [2024-11-15 12:48:16.456864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.306 [2024-11-15 12:48:16.456879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.306 [2024-11-15 12:48:16.456891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.306 [2024-11-15 12:48:16.456921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.306 qpair failed and we were unable to recover it. 00:26:36.306 [2024-11-15 12:48:16.466823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.306 [2024-11-15 12:48:16.466919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.306 [2024-11-15 12:48:16.466948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.306 [2024-11-15 12:48:16.466965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.306 [2024-11-15 12:48:16.466977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.306 [2024-11-15 12:48:16.467014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.306 qpair failed and we were unable to recover it. 
00:26:36.306 [2024-11-15 12:48:16.476829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.306 [2024-11-15 12:48:16.476916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.306 [2024-11-15 12:48:16.476942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.306 [2024-11-15 12:48:16.476957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.306 [2024-11-15 12:48:16.476969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.306 [2024-11-15 12:48:16.476999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.306 qpair failed and we were unable to recover it. 00:26:36.306 [2024-11-15 12:48:16.486853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.306 [2024-11-15 12:48:16.486931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.306 [2024-11-15 12:48:16.486957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.306 [2024-11-15 12:48:16.486971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.306 [2024-11-15 12:48:16.486984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.306 [2024-11-15 12:48:16.487014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.306 qpair failed and we were unable to recover it. 00:26:36.306 [2024-11-15 12:48:16.496850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.306 [2024-11-15 12:48:16.496936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.306 [2024-11-15 12:48:16.496961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.306 [2024-11-15 12:48:16.496975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.306 [2024-11-15 12:48:16.496988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.306 [2024-11-15 12:48:16.497017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.306 qpair failed and we were unable to recover it. 
00:26:36.306 [2024-11-15 12:48:16.506905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.306 [2024-11-15 12:48:16.506994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.306 [2024-11-15 12:48:16.507020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.306 [2024-11-15 12:48:16.507034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.306 [2024-11-15 12:48:16.507046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.306 [2024-11-15 12:48:16.507076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.306 qpair failed and we were unable to recover it. 00:26:36.306 [2024-11-15 12:48:16.516943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.306 [2024-11-15 12:48:16.517035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.306 [2024-11-15 12:48:16.517061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.306 [2024-11-15 12:48:16.517075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.306 [2024-11-15 12:48:16.517088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.306 [2024-11-15 12:48:16.517117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.306 qpair failed and we were unable to recover it. 00:26:36.306 [2024-11-15 12:48:16.526953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.306 [2024-11-15 12:48:16.527048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.306 [2024-11-15 12:48:16.527074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.306 [2024-11-15 12:48:16.527089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.306 [2024-11-15 12:48:16.527101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.306 [2024-11-15 12:48:16.527131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.306 qpair failed and we were unable to recover it. 
00:26:36.306 [2024-11-15 12:48:16.537026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.306 [2024-11-15 12:48:16.537109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.306 [2024-11-15 12:48:16.537135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.306 [2024-11-15 12:48:16.537149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.306 [2024-11-15 12:48:16.537161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.306 [2024-11-15 12:48:16.537191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.306 qpair failed and we were unable to recover it. 00:26:36.306 [2024-11-15 12:48:16.547049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.306 [2024-11-15 12:48:16.547163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.306 [2024-11-15 12:48:16.547189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.306 [2024-11-15 12:48:16.547203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.306 [2024-11-15 12:48:16.547215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.306 [2024-11-15 12:48:16.547245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.306 qpair failed and we were unable to recover it. 00:26:36.306 [2024-11-15 12:48:16.557064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.306 [2024-11-15 12:48:16.557148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.306 [2024-11-15 12:48:16.557178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.306 [2024-11-15 12:48:16.557194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.306 [2024-11-15 12:48:16.557207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.306 [2024-11-15 12:48:16.557237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.306 qpair failed and we were unable to recover it. 
00:26:36.306 [2024-11-15 12:48:16.567157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.307 [2024-11-15 12:48:16.567284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.307 [2024-11-15 12:48:16.567309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.307 [2024-11-15 12:48:16.567323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.307 [2024-11-15 12:48:16.567335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.307 [2024-11-15 12:48:16.567366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.307 qpair failed and we were unable to recover it. 00:26:36.307 [2024-11-15 12:48:16.577086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.307 [2024-11-15 12:48:16.577161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.307 [2024-11-15 12:48:16.577187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.307 [2024-11-15 12:48:16.577201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.307 [2024-11-15 12:48:16.577213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.307 [2024-11-15 12:48:16.577242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.307 qpair failed and we were unable to recover it. 00:26:36.307 [2024-11-15 12:48:16.587170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.307 [2024-11-15 12:48:16.587300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.307 [2024-11-15 12:48:16.587326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.307 [2024-11-15 12:48:16.587340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.307 [2024-11-15 12:48:16.587352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.307 [2024-11-15 12:48:16.587382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.307 qpair failed and we were unable to recover it. 
00:26:36.307 [2024-11-15 12:48:16.597180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.307 [2024-11-15 12:48:16.597270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.307 [2024-11-15 12:48:16.597295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.307 [2024-11-15 12:48:16.597310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.307 [2024-11-15 12:48:16.597322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.307 [2024-11-15 12:48:16.597357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.307 qpair failed and we were unable to recover it. 00:26:36.307 [2024-11-15 12:48:16.607212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.307 [2024-11-15 12:48:16.607293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.307 [2024-11-15 12:48:16.607319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.307 [2024-11-15 12:48:16.607333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.307 [2024-11-15 12:48:16.607346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.307 [2024-11-15 12:48:16.607375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.307 qpair failed and we were unable to recover it. 00:26:36.307 [2024-11-15 12:48:16.617229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.307 [2024-11-15 12:48:16.617308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.307 [2024-11-15 12:48:16.617335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.307 [2024-11-15 12:48:16.617349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.307 [2024-11-15 12:48:16.617361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.307 [2024-11-15 12:48:16.617390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.307 qpair failed and we were unable to recover it. 
00:26:36.307 [2024-11-15 12:48:16.627365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.307 [2024-11-15 12:48:16.627464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.307 [2024-11-15 12:48:16.627490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.307 [2024-11-15 12:48:16.627504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.307 [2024-11-15 12:48:16.627517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.307 [2024-11-15 12:48:16.627547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.307 qpair failed and we were unable to recover it. 00:26:36.307 [2024-11-15 12:48:16.637383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.307 [2024-11-15 12:48:16.637480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.307 [2024-11-15 12:48:16.637512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.307 [2024-11-15 12:48:16.637531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.307 [2024-11-15 12:48:16.637544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.307 [2024-11-15 12:48:16.637575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.307 qpair failed and we were unable to recover it. 00:26:36.566 [2024-11-15 12:48:16.647295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.566 [2024-11-15 12:48:16.647383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.566 [2024-11-15 12:48:16.647410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.566 [2024-11-15 12:48:16.647425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.566 [2024-11-15 12:48:16.647437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.566 [2024-11-15 12:48:16.647468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.566 qpair failed and we were unable to recover it. 
00:26:36.566 [2024-11-15 12:48:16.657416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.566 [2024-11-15 12:48:16.657496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.566 [2024-11-15 12:48:16.657522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.566 [2024-11-15 12:48:16.657537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.566 [2024-11-15 12:48:16.657549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.566 [2024-11-15 12:48:16.657580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.566 qpair failed and we were unable to recover it. 00:26:36.566 [2024-11-15 12:48:16.667402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.566 [2024-11-15 12:48:16.667490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.566 [2024-11-15 12:48:16.667517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.566 [2024-11-15 12:48:16.667532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.566 [2024-11-15 12:48:16.667545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.566 [2024-11-15 12:48:16.667575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.566 qpair failed and we were unable to recover it. 00:26:36.566 [2024-11-15 12:48:16.677406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.566 [2024-11-15 12:48:16.677488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.566 [2024-11-15 12:48:16.677515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.566 [2024-11-15 12:48:16.677529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.566 [2024-11-15 12:48:16.677542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.566 [2024-11-15 12:48:16.677572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.566 qpair failed and we were unable to recover it. 
00:26:36.566 [2024-11-15 12:48:16.687498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.566 [2024-11-15 12:48:16.687588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.566 [2024-11-15 12:48:16.687625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.566 [2024-11-15 12:48:16.687641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.566 [2024-11-15 12:48:16.687653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.566 [2024-11-15 12:48:16.687683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.566 qpair failed and we were unable to recover it. 00:26:36.566 [2024-11-15 12:48:16.697456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.566 [2024-11-15 12:48:16.697540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.566 [2024-11-15 12:48:16.697566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.566 [2024-11-15 12:48:16.697581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.566 [2024-11-15 12:48:16.697593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.566 [2024-11-15 12:48:16.697622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.566 qpair failed and we were unable to recover it. 00:26:36.567 [2024-11-15 12:48:16.707522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.567 [2024-11-15 12:48:16.707629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.567 [2024-11-15 12:48:16.707655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.567 [2024-11-15 12:48:16.707670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.567 [2024-11-15 12:48:16.707683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.567 [2024-11-15 12:48:16.707731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.567 qpair failed and we were unable to recover it. 
00:26:36.567 [2024-11-15 12:48:16.717501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.567 [2024-11-15 12:48:16.717594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.567 [2024-11-15 12:48:16.717620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.567 [2024-11-15 12:48:16.717634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.567 [2024-11-15 12:48:16.717647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.567 [2024-11-15 12:48:16.717677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.567 qpair failed and we were unable to recover it. 00:26:36.567 [2024-11-15 12:48:16.727548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.567 [2024-11-15 12:48:16.727635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.567 [2024-11-15 12:48:16.727660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.567 [2024-11-15 12:48:16.727675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.567 [2024-11-15 12:48:16.727693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.567 [2024-11-15 12:48:16.727731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.567 qpair failed and we were unable to recover it. 00:26:36.567 [2024-11-15 12:48:16.737581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.567 [2024-11-15 12:48:16.737671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.567 [2024-11-15 12:48:16.737697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.567 [2024-11-15 12:48:16.737712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.567 [2024-11-15 12:48:16.737733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.567 [2024-11-15 12:48:16.737765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.567 qpair failed and we were unable to recover it. 
00:26:36.567 [2024-11-15 12:48:16.747654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.567 [2024-11-15 12:48:16.747758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.567 [2024-11-15 12:48:16.747784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.567 [2024-11-15 12:48:16.747798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.567 [2024-11-15 12:48:16.747811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.567 [2024-11-15 12:48:16.747841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.567 qpair failed and we were unable to recover it. 00:26:36.567 [2024-11-15 12:48:16.757633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.567 [2024-11-15 12:48:16.757730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.567 [2024-11-15 12:48:16.757760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.567 [2024-11-15 12:48:16.757776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.567 [2024-11-15 12:48:16.757788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.567 [2024-11-15 12:48:16.757820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.567 qpair failed and we were unable to recover it. 00:26:36.567 [2024-11-15 12:48:16.767673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.567 [2024-11-15 12:48:16.767807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.567 [2024-11-15 12:48:16.767834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.567 [2024-11-15 12:48:16.767849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.567 [2024-11-15 12:48:16.767861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.567 [2024-11-15 12:48:16.767892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.567 qpair failed and we were unable to recover it. 
00:26:36.567 [2024-11-15 12:48:16.777671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.567 [2024-11-15 12:48:16.777759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.567 [2024-11-15 12:48:16.777786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.567 [2024-11-15 12:48:16.777800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.567 [2024-11-15 12:48:16.777812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.567 [2024-11-15 12:48:16.777842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.567 qpair failed and we were unable to recover it. 00:26:36.567 [2024-11-15 12:48:16.787828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.567 [2024-11-15 12:48:16.787923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.567 [2024-11-15 12:48:16.787949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.567 [2024-11-15 12:48:16.787963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.567 [2024-11-15 12:48:16.787975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.567 [2024-11-15 12:48:16.788005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.567 qpair failed and we were unable to recover it. 00:26:36.567 [2024-11-15 12:48:16.797740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.567 [2024-11-15 12:48:16.797828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.567 [2024-11-15 12:48:16.797854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.567 [2024-11-15 12:48:16.797868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.567 [2024-11-15 12:48:16.797881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.567 [2024-11-15 12:48:16.797911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.567 qpair failed and we were unable to recover it. 
00:26:36.567 [2024-11-15 12:48:16.807805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.567 [2024-11-15 12:48:16.807917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.567 [2024-11-15 12:48:16.807943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.567 [2024-11-15 12:48:16.807958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.567 [2024-11-15 12:48:16.807971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.567 [2024-11-15 12:48:16.808000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.567 qpair failed and we were unable to recover it. 00:26:36.567 [2024-11-15 12:48:16.817821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.567 [2024-11-15 12:48:16.817906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.567 [2024-11-15 12:48:16.817937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.567 [2024-11-15 12:48:16.817952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.567 [2024-11-15 12:48:16.817965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.567 [2024-11-15 12:48:16.817995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.567 qpair failed and we were unable to recover it. 00:26:36.567 [2024-11-15 12:48:16.827872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.567 [2024-11-15 12:48:16.827958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.567 [2024-11-15 12:48:16.827984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.567 [2024-11-15 12:48:16.827999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.567 [2024-11-15 12:48:16.828011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.567 [2024-11-15 12:48:16.828041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.567 qpair failed and we were unable to recover it. 
00:26:36.567 [2024-11-15 12:48:16.837885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.568 [2024-11-15 12:48:16.837967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.568 [2024-11-15 12:48:16.837995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.568 [2024-11-15 12:48:16.838013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.568 [2024-11-15 12:48:16.838026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.568 [2024-11-15 12:48:16.838069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.568 qpair failed and we were unable to recover it. 00:26:36.568 [2024-11-15 12:48:16.847873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.568 [2024-11-15 12:48:16.847956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.568 [2024-11-15 12:48:16.847982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.568 [2024-11-15 12:48:16.847996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.568 [2024-11-15 12:48:16.848009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.568 [2024-11-15 12:48:16.848039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.568 qpair failed and we were unable to recover it. 00:26:36.568 [2024-11-15 12:48:16.857936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.568 [2024-11-15 12:48:16.858021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.568 [2024-11-15 12:48:16.858047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.568 [2024-11-15 12:48:16.858067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.568 [2024-11-15 12:48:16.858080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.568 [2024-11-15 12:48:16.858111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.568 qpair failed and we were unable to recover it. 
00:26:36.568 [2024-11-15 12:48:16.867981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.568 [2024-11-15 12:48:16.868070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.568 [2024-11-15 12:48:16.868095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.568 [2024-11-15 12:48:16.868110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.568 [2024-11-15 12:48:16.868122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.568 [2024-11-15 12:48:16.868152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.568 qpair failed and we were unable to recover it. 00:26:36.568 [2024-11-15 12:48:16.877963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.568 [2024-11-15 12:48:16.878052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.568 [2024-11-15 12:48:16.878078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.568 [2024-11-15 12:48:16.878092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.568 [2024-11-15 12:48:16.878105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.568 [2024-11-15 12:48:16.878136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.568 qpair failed and we were unable to recover it. 00:26:36.568 [2024-11-15 12:48:16.888022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.568 [2024-11-15 12:48:16.888114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.568 [2024-11-15 12:48:16.888146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.568 [2024-11-15 12:48:16.888164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.568 [2024-11-15 12:48:16.888176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.568 [2024-11-15 12:48:16.888208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.568 qpair failed and we were unable to recover it. 
00:26:36.568 [2024-11-15 12:48:16.898018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.568 [2024-11-15 12:48:16.898116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.568 [2024-11-15 12:48:16.898144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.568 [2024-11-15 12:48:16.898159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.568 [2024-11-15 12:48:16.898171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.568 [2024-11-15 12:48:16.898202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.568 qpair failed and we were unable to recover it. 00:26:36.828 [2024-11-15 12:48:16.908071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.828 [2024-11-15 12:48:16.908162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.828 [2024-11-15 12:48:16.908188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.828 [2024-11-15 12:48:16.908202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.828 [2024-11-15 12:48:16.908215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.828 [2024-11-15 12:48:16.908245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.828 qpair failed and we were unable to recover it. 00:26:36.828 [2024-11-15 12:48:16.918138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.828 [2024-11-15 12:48:16.918228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.828 [2024-11-15 12:48:16.918254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.828 [2024-11-15 12:48:16.918268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.828 [2024-11-15 12:48:16.918281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.828 [2024-11-15 12:48:16.918310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.828 qpair failed and we were unable to recover it. 
00:26:36.828 [2024-11-15 12:48:16.928146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.828 [2024-11-15 12:48:16.928270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.828 [2024-11-15 12:48:16.928295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.828 [2024-11-15 12:48:16.928310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.828 [2024-11-15 12:48:16.928322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.828 [2024-11-15 12:48:16.928352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.828 qpair failed and we were unable to recover it. 00:26:36.828 [2024-11-15 12:48:16.938147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.828 [2024-11-15 12:48:16.938227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.828 [2024-11-15 12:48:16.938253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.828 [2024-11-15 12:48:16.938267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.828 [2024-11-15 12:48:16.938280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.828 [2024-11-15 12:48:16.938309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.828 qpair failed and we were unable to recover it. 00:26:36.828 [2024-11-15 12:48:16.948189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.828 [2024-11-15 12:48:16.948291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.828 [2024-11-15 12:48:16.948317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.828 [2024-11-15 12:48:16.948332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.828 [2024-11-15 12:48:16.948344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.828 [2024-11-15 12:48:16.948373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.828 qpair failed and we were unable to recover it. 
00:26:36.828 [2024-11-15 12:48:16.958197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.828 [2024-11-15 12:48:16.958279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.828 [2024-11-15 12:48:16.958305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.828 [2024-11-15 12:48:16.958319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.828 [2024-11-15 12:48:16.958332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.828 [2024-11-15 12:48:16.958361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.828 qpair failed and we were unable to recover it. 00:26:36.828 [2024-11-15 12:48:16.968241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.828 [2024-11-15 12:48:16.968329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.828 [2024-11-15 12:48:16.968355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.828 [2024-11-15 12:48:16.968370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.828 [2024-11-15 12:48:16.968382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.828 [2024-11-15 12:48:16.968412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.828 qpair failed and we were unable to recover it. 00:26:36.828 [2024-11-15 12:48:16.978276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.828 [2024-11-15 12:48:16.978363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.828 [2024-11-15 12:48:16.978389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.828 [2024-11-15 12:48:16.978404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.828 [2024-11-15 12:48:16.978416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.828 [2024-11-15 12:48:16.978446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.828 qpair failed and we were unable to recover it. 
00:26:36.828 [2024-11-15 12:48:16.988335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.828 [2024-11-15 12:48:16.988432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.828 [2024-11-15 12:48:16.988458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.828 [2024-11-15 12:48:16.988478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.828 [2024-11-15 12:48:16.988491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.828 [2024-11-15 12:48:16.988521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.828 qpair failed and we were unable to recover it. 00:26:36.828 [2024-11-15 12:48:16.998313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.828 [2024-11-15 12:48:16.998411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.828 [2024-11-15 12:48:16.998437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.828 [2024-11-15 12:48:16.998452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.828 [2024-11-15 12:48:16.998464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.828 [2024-11-15 12:48:16.998495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.828 qpair failed and we were unable to recover it. 00:26:36.828 [2024-11-15 12:48:17.008365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.828 [2024-11-15 12:48:17.008447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.828 [2024-11-15 12:48:17.008472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.828 [2024-11-15 12:48:17.008486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.828 [2024-11-15 12:48:17.008499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.828 [2024-11-15 12:48:17.008529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.829 qpair failed and we were unable to recover it. 
00:26:36.829 [2024-11-15 12:48:17.018433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.829 [2024-11-15 12:48:17.018537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.829 [2024-11-15 12:48:17.018562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.829 [2024-11-15 12:48:17.018577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.829 [2024-11-15 12:48:17.018589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.829 [2024-11-15 12:48:17.018619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.829 qpair failed and we were unable to recover it. 00:26:36.829 [2024-11-15 12:48:17.028416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.829 [2024-11-15 12:48:17.028523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.829 [2024-11-15 12:48:17.028549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.829 [2024-11-15 12:48:17.028563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.829 [2024-11-15 12:48:17.028576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.829 [2024-11-15 12:48:17.028612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.829 qpair failed and we were unable to recover it. 00:26:36.829 [2024-11-15 12:48:17.038458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.829 [2024-11-15 12:48:17.038538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.829 [2024-11-15 12:48:17.038564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.829 [2024-11-15 12:48:17.038578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.829 [2024-11-15 12:48:17.038590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.829 [2024-11-15 12:48:17.038620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.829 qpair failed and we were unable to recover it. 
00:26:36.829 [2024-11-15 12:48:17.048464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.829 [2024-11-15 12:48:17.048551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.829 [2024-11-15 12:48:17.048577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.829 [2024-11-15 12:48:17.048592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.829 [2024-11-15 12:48:17.048604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.829 [2024-11-15 12:48:17.048634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.829 qpair failed and we were unable to recover it. 00:26:36.829 [2024-11-15 12:48:17.058491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.829 [2024-11-15 12:48:17.058607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.829 [2024-11-15 12:48:17.058633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.829 [2024-11-15 12:48:17.058648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.829 [2024-11-15 12:48:17.058660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.829 [2024-11-15 12:48:17.058690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.829 qpair failed and we were unable to recover it. 00:26:36.829 [2024-11-15 12:48:17.068533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.829 [2024-11-15 12:48:17.068622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.829 [2024-11-15 12:48:17.068648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.829 [2024-11-15 12:48:17.068663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.829 [2024-11-15 12:48:17.068675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.829 [2024-11-15 12:48:17.068705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.829 qpair failed and we were unable to recover it. 
00:26:36.829 [2024-11-15 12:48:17.078633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.829 [2024-11-15 12:48:17.078730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.829 [2024-11-15 12:48:17.078756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.829 [2024-11-15 12:48:17.078771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.829 [2024-11-15 12:48:17.078783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.829 [2024-11-15 12:48:17.078813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.829 qpair failed and we were unable to recover it. 00:26:36.829 [2024-11-15 12:48:17.088653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.829 [2024-11-15 12:48:17.088753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.829 [2024-11-15 12:48:17.088780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.829 [2024-11-15 12:48:17.088795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.829 [2024-11-15 12:48:17.088807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.829 [2024-11-15 12:48:17.088837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.829 qpair failed and we were unable to recover it. 00:26:36.829 [2024-11-15 12:48:17.098618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.829 [2024-11-15 12:48:17.098703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.829 [2024-11-15 12:48:17.098736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.829 [2024-11-15 12:48:17.098751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.829 [2024-11-15 12:48:17.098763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.829 [2024-11-15 12:48:17.098794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.829 qpair failed and we were unable to recover it. 
00:26:36.829 [2024-11-15 12:48:17.108650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.829 [2024-11-15 12:48:17.108744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.829 [2024-11-15 12:48:17.108770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.829 [2024-11-15 12:48:17.108784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.829 [2024-11-15 12:48:17.108797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.829 [2024-11-15 12:48:17.108827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.829 qpair failed and we were unable to recover it. 00:26:36.829 [2024-11-15 12:48:17.118695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.829 [2024-11-15 12:48:17.118826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.829 [2024-11-15 12:48:17.118859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.829 [2024-11-15 12:48:17.118875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.829 [2024-11-15 12:48:17.118889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.829 [2024-11-15 12:48:17.118920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.829 qpair failed and we were unable to recover it. 00:26:36.829 [2024-11-15 12:48:17.128726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.829 [2024-11-15 12:48:17.128831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.829 [2024-11-15 12:48:17.128856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.829 [2024-11-15 12:48:17.128871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.829 [2024-11-15 12:48:17.128883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.830 [2024-11-15 12:48:17.128913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.830 qpair failed and we were unable to recover it. 
00:26:36.830 [2024-11-15 12:48:17.138742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.830 [2024-11-15 12:48:17.138849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.830 [2024-11-15 12:48:17.138879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.830 [2024-11-15 12:48:17.138895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.830 [2024-11-15 12:48:17.138908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.830 [2024-11-15 12:48:17.138940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.830 qpair failed and we were unable to recover it. 00:26:36.830 [2024-11-15 12:48:17.148795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.830 [2024-11-15 12:48:17.148887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.830 [2024-11-15 12:48:17.148915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.830 [2024-11-15 12:48:17.148929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.830 [2024-11-15 12:48:17.148942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.830 [2024-11-15 12:48:17.148973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.830 qpair failed and we were unable to recover it. 00:26:36.830 [2024-11-15 12:48:17.158782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.830 [2024-11-15 12:48:17.158901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.830 [2024-11-15 12:48:17.158927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.830 [2024-11-15 12:48:17.158942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.830 [2024-11-15 12:48:17.158959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.830 [2024-11-15 12:48:17.158991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.830 qpair failed and we were unable to recover it. 
00:26:36.830 [2024-11-15 12:48:17.168806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.830 [2024-11-15 12:48:17.168911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.830 [2024-11-15 12:48:17.168937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.830 [2024-11-15 12:48:17.168952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.830 [2024-11-15 12:48:17.168964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:36.830 [2024-11-15 12:48:17.168995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:36.830 qpair failed and we were unable to recover it. 00:26:37.089 [2024-11-15 12:48:17.178821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.089 [2024-11-15 12:48:17.178900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.089 [2024-11-15 12:48:17.178926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.089 [2024-11-15 12:48:17.178940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.089 [2024-11-15 12:48:17.178953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.089 [2024-11-15 12:48:17.178983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.089 qpair failed and we were unable to recover it. 00:26:37.089 [2024-11-15 12:48:17.188950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.089 [2024-11-15 12:48:17.189041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.089 [2024-11-15 12:48:17.189067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.089 [2024-11-15 12:48:17.189082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.089 [2024-11-15 12:48:17.189094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.089 [2024-11-15 12:48:17.189125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.089 qpair failed and we were unable to recover it. 
00:26:37.089 [2024-11-15 12:48:17.198881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.089 [2024-11-15 12:48:17.199002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.089 [2024-11-15 12:48:17.199027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.089 [2024-11-15 12:48:17.199042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.089 [2024-11-15 12:48:17.199055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.089 [2024-11-15 12:48:17.199084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.089 qpair failed and we were unable to recover it. 00:26:37.089 [2024-11-15 12:48:17.208931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.089 [2024-11-15 12:48:17.209016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.089 [2024-11-15 12:48:17.209042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.089 [2024-11-15 12:48:17.209056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.089 [2024-11-15 12:48:17.209069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.089 [2024-11-15 12:48:17.209098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.089 qpair failed and we were unable to recover it. 00:26:37.089 [2024-11-15 12:48:17.218964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.089 [2024-11-15 12:48:17.219047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.089 [2024-11-15 12:48:17.219072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.089 [2024-11-15 12:48:17.219087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.090 [2024-11-15 12:48:17.219099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.090 [2024-11-15 12:48:17.219130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.090 qpair failed and we were unable to recover it. 
00:26:37.090 [2024-11-15 12:48:17.229013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.090 [2024-11-15 12:48:17.229103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.090 [2024-11-15 12:48:17.229129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.090 [2024-11-15 12:48:17.229143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.090 [2024-11-15 12:48:17.229155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.090 [2024-11-15 12:48:17.229185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.090 qpair failed and we were unable to recover it. 00:26:37.090 [2024-11-15 12:48:17.239114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.090 [2024-11-15 12:48:17.239197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.090 [2024-11-15 12:48:17.239222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.090 [2024-11-15 12:48:17.239236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.090 [2024-11-15 12:48:17.239249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.090 [2024-11-15 12:48:17.239279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.090 qpair failed and we were unable to recover it. 00:26:37.090 [2024-11-15 12:48:17.249050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.090 [2024-11-15 12:48:17.249132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.090 [2024-11-15 12:48:17.249164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.090 [2024-11-15 12:48:17.249180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.090 [2024-11-15 12:48:17.249192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.090 [2024-11-15 12:48:17.249222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.090 qpair failed and we were unable to recover it. 
00:26:37.090 [2024-11-15 12:48:17.259048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.090 [2024-11-15 12:48:17.259140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.090 [2024-11-15 12:48:17.259167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.090 [2024-11-15 12:48:17.259181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.090 [2024-11-15 12:48:17.259193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.090 [2024-11-15 12:48:17.259222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.090 qpair failed and we were unable to recover it. 00:26:37.090 [2024-11-15 12:48:17.269119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.090 [2024-11-15 12:48:17.269206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.090 [2024-11-15 12:48:17.269231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.090 [2024-11-15 12:48:17.269246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.090 [2024-11-15 12:48:17.269258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.090 [2024-11-15 12:48:17.269287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.090 qpair failed and we were unable to recover it. 00:26:37.090 [2024-11-15 12:48:17.279123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.090 [2024-11-15 12:48:17.279201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.090 [2024-11-15 12:48:17.279226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.090 [2024-11-15 12:48:17.279241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.090 [2024-11-15 12:48:17.279253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.090 [2024-11-15 12:48:17.279283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.090 qpair failed and we were unable to recover it. 
00:26:37.090 [2024-11-15 12:48:17.289167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.090 [2024-11-15 12:48:17.289246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.090 [2024-11-15 12:48:17.289271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.090 [2024-11-15 12:48:17.289286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.090 [2024-11-15 12:48:17.289304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.090 [2024-11-15 12:48:17.289335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.090 qpair failed and we were unable to recover it. 00:26:37.090 [2024-11-15 12:48:17.299200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.090 [2024-11-15 12:48:17.299323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.090 [2024-11-15 12:48:17.299348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.090 [2024-11-15 12:48:17.299363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.090 [2024-11-15 12:48:17.299375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.090 [2024-11-15 12:48:17.299405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.090 qpair failed and we were unable to recover it. 00:26:37.090 [2024-11-15 12:48:17.309247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.090 [2024-11-15 12:48:17.309330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.090 [2024-11-15 12:48:17.309356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.090 [2024-11-15 12:48:17.309370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.090 [2024-11-15 12:48:17.309383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.090 [2024-11-15 12:48:17.309413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.090 qpair failed and we were unable to recover it. 
00:26:37.090 [2024-11-15 12:48:17.319269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.090 [2024-11-15 12:48:17.319357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.090 [2024-11-15 12:48:17.319382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.090 [2024-11-15 12:48:17.319397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.090 [2024-11-15 12:48:17.319409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.090 [2024-11-15 12:48:17.319439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.090 qpair failed and we were unable to recover it. 00:26:37.090 [2024-11-15 12:48:17.329284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.090 [2024-11-15 12:48:17.329363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.090 [2024-11-15 12:48:17.329389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.090 [2024-11-15 12:48:17.329403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.090 [2024-11-15 12:48:17.329415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.091 [2024-11-15 12:48:17.329445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.091 qpair failed and we were unable to recover it. 00:26:37.091 [2024-11-15 12:48:17.339296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.091 [2024-11-15 12:48:17.339377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.091 [2024-11-15 12:48:17.339403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.091 [2024-11-15 12:48:17.339417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.091 [2024-11-15 12:48:17.339429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.091 [2024-11-15 12:48:17.339459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.091 qpair failed and we were unable to recover it. 
00:26:37.091 [2024-11-15 12:48:17.349358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.091 [2024-11-15 12:48:17.349452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.091 [2024-11-15 12:48:17.349477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.091 [2024-11-15 12:48:17.349492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.091 [2024-11-15 12:48:17.349504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.091 [2024-11-15 12:48:17.349546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.091 qpair failed and we were unable to recover it. 00:26:37.091 [2024-11-15 12:48:17.359473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.091 [2024-11-15 12:48:17.359573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.091 [2024-11-15 12:48:17.359599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.091 [2024-11-15 12:48:17.359614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.091 [2024-11-15 12:48:17.359626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.091 [2024-11-15 12:48:17.359655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.091 qpair failed and we were unable to recover it. 00:26:37.091 [2024-11-15 12:48:17.369387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.091 [2024-11-15 12:48:17.369475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.091 [2024-11-15 12:48:17.369501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.091 [2024-11-15 12:48:17.369515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.091 [2024-11-15 12:48:17.369527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.091 [2024-11-15 12:48:17.369557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.091 qpair failed and we were unable to recover it. 
00:26:37.091 [2024-11-15 12:48:17.379412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.091 [2024-11-15 12:48:17.379545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.091 [2024-11-15 12:48:17.379576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.091 [2024-11-15 12:48:17.379592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.091 [2024-11-15 12:48:17.379604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.091 [2024-11-15 12:48:17.379634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.091 qpair failed and we were unable to recover it. 00:26:37.091 [2024-11-15 12:48:17.389444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.091 [2024-11-15 12:48:17.389547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.091 [2024-11-15 12:48:17.389578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.091 [2024-11-15 12:48:17.389595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.091 [2024-11-15 12:48:17.389608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.091 [2024-11-15 12:48:17.389639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.091 qpair failed and we were unable to recover it. 00:26:37.091 [2024-11-15 12:48:17.399483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.091 [2024-11-15 12:48:17.399581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.091 [2024-11-15 12:48:17.399608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.091 [2024-11-15 12:48:17.399623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.091 [2024-11-15 12:48:17.399635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.091 [2024-11-15 12:48:17.399666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.091 qpair failed and we were unable to recover it. 
00:26:37.091 [2024-11-15 12:48:17.409493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.091 [2024-11-15 12:48:17.409583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.091 [2024-11-15 12:48:17.409609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.091 [2024-11-15 12:48:17.409624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.091 [2024-11-15 12:48:17.409637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.091 [2024-11-15 12:48:17.409667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.091 qpair failed and we were unable to recover it. 00:26:37.091 [2024-11-15 12:48:17.419655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.091 [2024-11-15 12:48:17.419789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.091 [2024-11-15 12:48:17.419815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.091 [2024-11-15 12:48:17.419836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.091 [2024-11-15 12:48:17.419849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.091 [2024-11-15 12:48:17.419879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.091 qpair failed and we were unable to recover it. 00:26:37.091 [2024-11-15 12:48:17.429582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.091 [2024-11-15 12:48:17.429700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.091 [2024-11-15 12:48:17.429732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.091 [2024-11-15 12:48:17.429747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.091 [2024-11-15 12:48:17.429760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.091 [2024-11-15 12:48:17.429789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.091 qpair failed and we were unable to recover it. 
00:26:37.358 [2024-11-15 12:48:17.439586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.358 [2024-11-15 12:48:17.439703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.358 [2024-11-15 12:48:17.439739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.358 [2024-11-15 12:48:17.439755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.358 [2024-11-15 12:48:17.439767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.358 [2024-11-15 12:48:17.439797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.358 qpair failed and we were unable to recover it. 00:26:37.358 [2024-11-15 12:48:17.449669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.358 [2024-11-15 12:48:17.449812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.358 [2024-11-15 12:48:17.449839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.358 [2024-11-15 12:48:17.449853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.359 [2024-11-15 12:48:17.449865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.359 [2024-11-15 12:48:17.449896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.359 qpair failed and we were unable to recover it. 00:26:37.359 [2024-11-15 12:48:17.459666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.359 [2024-11-15 12:48:17.459783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.359 [2024-11-15 12:48:17.459809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.359 [2024-11-15 12:48:17.459824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.359 [2024-11-15 12:48:17.459837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.359 [2024-11-15 12:48:17.459868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.359 qpair failed and we were unable to recover it. 
00:26:37.359 [2024-11-15 12:48:17.469685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.359 [2024-11-15 12:48:17.469821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.359 [2024-11-15 12:48:17.469855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.359 [2024-11-15 12:48:17.469872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.359 [2024-11-15 12:48:17.469886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.359 [2024-11-15 12:48:17.469916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.359 qpair failed and we were unable to recover it. 00:26:37.359 [2024-11-15 12:48:17.479693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.359 [2024-11-15 12:48:17.479784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.359 [2024-11-15 12:48:17.479809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.359 [2024-11-15 12:48:17.479824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.359 [2024-11-15 12:48:17.479837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.360 [2024-11-15 12:48:17.479867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.360 qpair failed and we were unable to recover it. 00:26:37.360 [2024-11-15 12:48:17.489701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.360 [2024-11-15 12:48:17.489800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.360 [2024-11-15 12:48:17.489826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.360 [2024-11-15 12:48:17.489841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.360 [2024-11-15 12:48:17.489854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.360 [2024-11-15 12:48:17.489883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.360 qpair failed and we were unable to recover it. 
00:26:37.360 [2024-11-15 12:48:17.499752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.360 [2024-11-15 12:48:17.499835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.360 [2024-11-15 12:48:17.499861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.360 [2024-11-15 12:48:17.499875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.360 [2024-11-15 12:48:17.499887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.361 [2024-11-15 12:48:17.499918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.361 qpair failed and we were unable to recover it. 00:26:37.361 [2024-11-15 12:48:17.509806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.361 [2024-11-15 12:48:17.509901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.361 [2024-11-15 12:48:17.509926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.361 [2024-11-15 12:48:17.509940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.361 [2024-11-15 12:48:17.509952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.361 [2024-11-15 12:48:17.509982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.361 qpair failed and we were unable to recover it. 00:26:37.361 [2024-11-15 12:48:17.519824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.361 [2024-11-15 12:48:17.519908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.361 [2024-11-15 12:48:17.519934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.361 [2024-11-15 12:48:17.519949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.361 [2024-11-15 12:48:17.519961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.361 [2024-11-15 12:48:17.519991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.361 qpair failed and we were unable to recover it. 
00:26:37.361 [2024-11-15 12:48:17.529915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.361 [2024-11-15 12:48:17.530004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.361 [2024-11-15 12:48:17.530040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.361 [2024-11-15 12:48:17.530054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.361 [2024-11-15 12:48:17.530067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.364 [2024-11-15 12:48:17.530096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.364 qpair failed and we were unable to recover it. 00:26:37.364 [2024-11-15 12:48:17.539886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.364 [2024-11-15 12:48:17.539967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.364 [2024-11-15 12:48:17.539993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.364 [2024-11-15 12:48:17.540008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.364 [2024-11-15 12:48:17.540020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.364 [2024-11-15 12:48:17.540050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.364 qpair failed and we were unable to recover it. 00:26:37.364 [2024-11-15 12:48:17.549948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.364 [2024-11-15 12:48:17.550042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.364 [2024-11-15 12:48:17.550067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.364 [2024-11-15 12:48:17.550089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.364 [2024-11-15 12:48:17.550103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.365 [2024-11-15 12:48:17.550132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.365 qpair failed and we were unable to recover it. 
00:26:37.365 [2024-11-15 12:48:17.559970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.365 [2024-11-15 12:48:17.560085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.365 [2024-11-15 12:48:17.560110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.365 [2024-11-15 12:48:17.560124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.365 [2024-11-15 12:48:17.560137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.365 [2024-11-15 12:48:17.560167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.365 qpair failed and we were unable to recover it. 00:26:37.365 [2024-11-15 12:48:17.569981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.365 [2024-11-15 12:48:17.570096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.365 [2024-11-15 12:48:17.570122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.365 [2024-11-15 12:48:17.570136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.365 [2024-11-15 12:48:17.570148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.365 [2024-11-15 12:48:17.570178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.365 qpair failed and we were unable to recover it. 00:26:37.365 [2024-11-15 12:48:17.580040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.365 [2024-11-15 12:48:17.580118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.365 [2024-11-15 12:48:17.580144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.365 [2024-11-15 12:48:17.580158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.365 [2024-11-15 12:48:17.580171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.365 [2024-11-15 12:48:17.580201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.365 qpair failed and we were unable to recover it. 
00:26:37.365 [2024-11-15 12:48:17.590086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.365 [2024-11-15 12:48:17.590181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.365 [2024-11-15 12:48:17.590207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.366 [2024-11-15 12:48:17.590222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.366 [2024-11-15 12:48:17.590234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.366 [2024-11-15 12:48:17.590270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.366 qpair failed and we were unable to recover it. 00:26:37.366 [2024-11-15 12:48:17.600081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.366 [2024-11-15 12:48:17.600162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.366 [2024-11-15 12:48:17.600188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.366 [2024-11-15 12:48:17.600203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.366 [2024-11-15 12:48:17.600215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.366 [2024-11-15 12:48:17.600257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.366 qpair failed and we were unable to recover it. 00:26:37.366 [2024-11-15 12:48:17.610113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.366 [2024-11-15 12:48:17.610200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.366 [2024-11-15 12:48:17.610226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.366 [2024-11-15 12:48:17.610241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.366 [2024-11-15 12:48:17.610253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.366 [2024-11-15 12:48:17.610283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.366 qpair failed and we were unable to recover it. 
00:26:37.366 [2024-11-15 12:48:17.620171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.366 [2024-11-15 12:48:17.620261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.366 [2024-11-15 12:48:17.620287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.366 [2024-11-15 12:48:17.620302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.367 [2024-11-15 12:48:17.620314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.367 [2024-11-15 12:48:17.620344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.367 qpair failed and we were unable to recover it. 00:26:37.367 [2024-11-15 12:48:17.630172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.367 [2024-11-15 12:48:17.630307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.367 [2024-11-15 12:48:17.630333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.367 [2024-11-15 12:48:17.630348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.367 [2024-11-15 12:48:17.630360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.367 [2024-11-15 12:48:17.630403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.367 qpair failed and we were unable to recover it. 00:26:37.367 [2024-11-15 12:48:17.640202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.367 [2024-11-15 12:48:17.640311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.367 [2024-11-15 12:48:17.640342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.367 [2024-11-15 12:48:17.640358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.368 [2024-11-15 12:48:17.640371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.368 [2024-11-15 12:48:17.640402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.368 qpair failed and we were unable to recover it. 
00:26:37.368 [2024-11-15 12:48:17.650231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.368 [2024-11-15 12:48:17.650314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.368 [2024-11-15 12:48:17.650341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.368 [2024-11-15 12:48:17.650357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.368 [2024-11-15 12:48:17.650369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.368 [2024-11-15 12:48:17.650399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.368 qpair failed and we were unable to recover it. 00:26:37.368 [2024-11-15 12:48:17.660225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.368 [2024-11-15 12:48:17.660306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.368 [2024-11-15 12:48:17.660333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.368 [2024-11-15 12:48:17.660347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.368 [2024-11-15 12:48:17.660359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.368 [2024-11-15 12:48:17.660389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.368 qpair failed and we were unable to recover it. 00:26:37.368 [2024-11-15 12:48:17.670284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.368 [2024-11-15 12:48:17.670398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.368 [2024-11-15 12:48:17.670425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.368 [2024-11-15 12:48:17.670440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.368 [2024-11-15 12:48:17.670452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.369 [2024-11-15 12:48:17.670482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.369 qpair failed and we were unable to recover it. 
00:26:37.369 [2024-11-15 12:48:17.680311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.369 [2024-11-15 12:48:17.680390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.369 [2024-11-15 12:48:17.680421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.369 [2024-11-15 12:48:17.680436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.369 [2024-11-15 12:48:17.680448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.369 [2024-11-15 12:48:17.680478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.369 qpair failed and we were unable to recover it. 00:26:37.369 [2024-11-15 12:48:17.690316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.369 [2024-11-15 12:48:17.690402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.369 [2024-11-15 12:48:17.690428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.369 [2024-11-15 12:48:17.690443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.369 [2024-11-15 12:48:17.690455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.369 [2024-11-15 12:48:17.690485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.369 qpair failed and we were unable to recover it. 00:26:37.630 [2024-11-15 12:48:17.700429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.631 [2024-11-15 12:48:17.700533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.631 [2024-11-15 12:48:17.700559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.631 [2024-11-15 12:48:17.700574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.631 [2024-11-15 12:48:17.700586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.631 [2024-11-15 12:48:17.700615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.631 qpair failed and we were unable to recover it. 
00:26:37.631 [2024-11-15 12:48:17.710497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.631 [2024-11-15 12:48:17.710585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.631 [2024-11-15 12:48:17.710611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.631 [2024-11-15 12:48:17.710626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.631 [2024-11-15 12:48:17.710638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.631 [2024-11-15 12:48:17.710667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.631 qpair failed and we were unable to recover it. 00:26:37.631 [2024-11-15 12:48:17.720496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.631 [2024-11-15 12:48:17.720578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.631 [2024-11-15 12:48:17.720604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.631 [2024-11-15 12:48:17.720619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.631 [2024-11-15 12:48:17.720637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.631 [2024-11-15 12:48:17.720668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.631 qpair failed and we were unable to recover it. 00:26:37.631 [2024-11-15 12:48:17.730486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.631 [2024-11-15 12:48:17.730579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.631 [2024-11-15 12:48:17.730605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.631 [2024-11-15 12:48:17.730619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.631 [2024-11-15 12:48:17.730631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.631 [2024-11-15 12:48:17.730662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.631 qpair failed and we were unable to recover it. 
00:26:37.631 [2024-11-15 12:48:17.740487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.631 [2024-11-15 12:48:17.740569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.631 [2024-11-15 12:48:17.740594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.631 [2024-11-15 12:48:17.740609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.631 [2024-11-15 12:48:17.740621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.631 [2024-11-15 12:48:17.740651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.631 qpair failed and we were unable to recover it. 00:26:37.631 [2024-11-15 12:48:17.750538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.631 [2024-11-15 12:48:17.750655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.631 [2024-11-15 12:48:17.750681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.631 [2024-11-15 12:48:17.750696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.631 [2024-11-15 12:48:17.750709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.631 [2024-11-15 12:48:17.750747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.631 qpair failed and we were unable to recover it. 00:26:37.631 [2024-11-15 12:48:17.760523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.631 [2024-11-15 12:48:17.760609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.631 [2024-11-15 12:48:17.760635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.631 [2024-11-15 12:48:17.760649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.631 [2024-11-15 12:48:17.760661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.631 [2024-11-15 12:48:17.760692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.631 qpair failed and we were unable to recover it. 
00:26:37.631 [2024-11-15 12:48:17.770571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.631 [2024-11-15 12:48:17.770675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.631 [2024-11-15 12:48:17.770702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.631 [2024-11-15 12:48:17.770726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.631 [2024-11-15 12:48:17.770741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.631 [2024-11-15 12:48:17.770772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.631 qpair failed and we were unable to recover it. 00:26:37.631 [2024-11-15 12:48:17.780654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.631 [2024-11-15 12:48:17.780756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.631 [2024-11-15 12:48:17.780782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.631 [2024-11-15 12:48:17.780797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.631 [2024-11-15 12:48:17.780809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.631 [2024-11-15 12:48:17.780839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.631 qpair failed and we were unable to recover it. 00:26:37.631 [2024-11-15 12:48:17.790670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.631 [2024-11-15 12:48:17.790810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.631 [2024-11-15 12:48:17.790837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.631 [2024-11-15 12:48:17.790851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.631 [2024-11-15 12:48:17.790864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.631 [2024-11-15 12:48:17.790894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.631 qpair failed and we were unable to recover it. 
00:26:37.631 [2024-11-15 12:48:17.800658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.631 [2024-11-15 12:48:17.800760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.631 [2024-11-15 12:48:17.800786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.631 [2024-11-15 12:48:17.800801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.631 [2024-11-15 12:48:17.800813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.631 [2024-11-15 12:48:17.800856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.631 qpair failed and we were unable to recover it. 00:26:37.631 [2024-11-15 12:48:17.810772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.631 [2024-11-15 12:48:17.810858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.631 [2024-11-15 12:48:17.810890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.631 [2024-11-15 12:48:17.810906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.631 [2024-11-15 12:48:17.810918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.631 [2024-11-15 12:48:17.810948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.631 qpair failed and we were unable to recover it. 00:26:37.631 [2024-11-15 12:48:17.820713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.631 [2024-11-15 12:48:17.820813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.631 [2024-11-15 12:48:17.820838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.631 [2024-11-15 12:48:17.820853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.631 [2024-11-15 12:48:17.820866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.632 [2024-11-15 12:48:17.820896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.632 qpair failed and we were unable to recover it. 
00:26:37.632 [2024-11-15 12:48:17.830859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.632 [2024-11-15 12:48:17.830979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.632 [2024-11-15 12:48:17.831004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.632 [2024-11-15 12:48:17.831018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.632 [2024-11-15 12:48:17.831030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.632 [2024-11-15 12:48:17.831061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.632 qpair failed and we were unable to recover it. 00:26:37.632 [2024-11-15 12:48:17.840781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.632 [2024-11-15 12:48:17.840866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.632 [2024-11-15 12:48:17.840894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.632 [2024-11-15 12:48:17.840910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.632 [2024-11-15 12:48:17.840922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.632 [2024-11-15 12:48:17.840952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.632 qpair failed and we were unable to recover it. 00:26:37.632 [2024-11-15 12:48:17.850812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.632 [2024-11-15 12:48:17.850900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.632 [2024-11-15 12:48:17.850926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.632 [2024-11-15 12:48:17.850940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.632 [2024-11-15 12:48:17.850958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.632 [2024-11-15 12:48:17.850988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.632 qpair failed and we were unable to recover it. 
00:26:37.632 [2024-11-15 12:48:17.860790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.632 [2024-11-15 12:48:17.860874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.632 [2024-11-15 12:48:17.860900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.632 [2024-11-15 12:48:17.860914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.632 [2024-11-15 12:48:17.860927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.632 [2024-11-15 12:48:17.860957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.632 qpair failed and we were unable to recover it. 00:26:37.632 [2024-11-15 12:48:17.870872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.632 [2024-11-15 12:48:17.870961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.632 [2024-11-15 12:48:17.870987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.632 [2024-11-15 12:48:17.871002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.632 [2024-11-15 12:48:17.871014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.632 [2024-11-15 12:48:17.871044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.632 qpair failed and we were unable to recover it. 00:26:37.632 [2024-11-15 12:48:17.880955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.632 [2024-11-15 12:48:17.881047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.632 [2024-11-15 12:48:17.881072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.632 [2024-11-15 12:48:17.881087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.632 [2024-11-15 12:48:17.881100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.632 [2024-11-15 12:48:17.881129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.632 qpair failed and we were unable to recover it. 
00:26:37.632 [2024-11-15 12:48:17.890920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.632 [2024-11-15 12:48:17.891015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.632 [2024-11-15 12:48:17.891048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.632 [2024-11-15 12:48:17.891066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.632 [2024-11-15 12:48:17.891079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.632 [2024-11-15 12:48:17.891110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.632 qpair failed and we were unable to recover it. 00:26:37.632 [2024-11-15 12:48:17.901055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.632 [2024-11-15 12:48:17.901184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.632 [2024-11-15 12:48:17.901211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.632 [2024-11-15 12:48:17.901225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.632 [2024-11-15 12:48:17.901237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.632 [2024-11-15 12:48:17.901268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.632 qpair failed and we were unable to recover it. 00:26:37.632 [2024-11-15 12:48:17.910941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.632 [2024-11-15 12:48:17.911033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.632 [2024-11-15 12:48:17.911059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.632 [2024-11-15 12:48:17.911074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.632 [2024-11-15 12:48:17.911086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.632 [2024-11-15 12:48:17.911116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.632 qpair failed and we were unable to recover it. 
00:26:37.632 [2024-11-15 12:48:17.921092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.632 [2024-11-15 12:48:17.921175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.632 [2024-11-15 12:48:17.921201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.632 [2024-11-15 12:48:17.921216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.632 [2024-11-15 12:48:17.921229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.632 [2024-11-15 12:48:17.921259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.632 qpair failed and we were unable to recover it. 00:26:37.632 [2024-11-15 12:48:17.930997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.632 [2024-11-15 12:48:17.931094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.632 [2024-11-15 12:48:17.931120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.632 [2024-11-15 12:48:17.931134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.632 [2024-11-15 12:48:17.931146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.632 [2024-11-15 12:48:17.931176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.632 qpair failed and we were unable to recover it. 00:26:37.632 [2024-11-15 12:48:17.941144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.632 [2024-11-15 12:48:17.941226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.632 [2024-11-15 12:48:17.941257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.632 [2024-11-15 12:48:17.941273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.632 [2024-11-15 12:48:17.941285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.632 [2024-11-15 12:48:17.941315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.632 qpair failed and we were unable to recover it. 
00:26:37.632 [2024-11-15 12:48:17.951054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.632 [2024-11-15 12:48:17.951142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.632 [2024-11-15 12:48:17.951168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.632 [2024-11-15 12:48:17.951182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.632 [2024-11-15 12:48:17.951195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.632 [2024-11-15 12:48:17.951224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.633 qpair failed and we were unable to recover it. 00:26:37.633 [2024-11-15 12:48:17.961115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.633 [2024-11-15 12:48:17.961197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.633 [2024-11-15 12:48:17.961223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.633 [2024-11-15 12:48:17.961237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.633 [2024-11-15 12:48:17.961249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.633 [2024-11-15 12:48:17.961279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.633 qpair failed and we were unable to recover it. 00:26:37.633 [2024-11-15 12:48:17.971205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.633 [2024-11-15 12:48:17.971327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.633 [2024-11-15 12:48:17.971353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.633 [2024-11-15 12:48:17.971368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.633 [2024-11-15 12:48:17.971380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.633 [2024-11-15 12:48:17.971410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.633 qpair failed and we were unable to recover it. 
00:26:37.892 [2024-11-15 12:48:17.981149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.892 [2024-11-15 12:48:17.981228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.892 [2024-11-15 12:48:17.981254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.892 [2024-11-15 12:48:17.981274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.892 [2024-11-15 12:48:17.981287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.892 [2024-11-15 12:48:17.981317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.892 qpair failed and we were unable to recover it. 00:26:37.892 [2024-11-15 12:48:17.991193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.892 [2024-11-15 12:48:17.991283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.892 [2024-11-15 12:48:17.991308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.892 [2024-11-15 12:48:17.991323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.892 [2024-11-15 12:48:17.991336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.892 [2024-11-15 12:48:17.991366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.892 qpair failed and we were unable to recover it. 00:26:37.892 [2024-11-15 12:48:18.001234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.892 [2024-11-15 12:48:18.001362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.892 [2024-11-15 12:48:18.001388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.892 [2024-11-15 12:48:18.001403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.892 [2024-11-15 12:48:18.001416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.892 [2024-11-15 12:48:18.001447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.892 qpair failed and we were unable to recover it. 
00:26:37.892 [2024-11-15 12:48:18.011266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.892 [2024-11-15 12:48:18.011370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.892 [2024-11-15 12:48:18.011395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.892 [2024-11-15 12:48:18.011409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.892 [2024-11-15 12:48:18.011422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.892 [2024-11-15 12:48:18.011452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.892 qpair failed and we were unable to recover it. 00:26:37.892 [2024-11-15 12:48:18.021281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.892 [2024-11-15 12:48:18.021368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.892 [2024-11-15 12:48:18.021396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.892 [2024-11-15 12:48:18.021411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.892 [2024-11-15 12:48:18.021423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.892 [2024-11-15 12:48:18.021453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.892 qpair failed and we were unable to recover it. 00:26:37.892 [2024-11-15 12:48:18.031333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.892 [2024-11-15 12:48:18.031424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.892 [2024-11-15 12:48:18.031449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.892 [2024-11-15 12:48:18.031464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.892 [2024-11-15 12:48:18.031476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.892 [2024-11-15 12:48:18.031506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.892 qpair failed and we were unable to recover it. 
00:26:37.892 [2024-11-15 12:48:18.041326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.892 [2024-11-15 12:48:18.041407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.892 [2024-11-15 12:48:18.041432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.893 [2024-11-15 12:48:18.041447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.893 [2024-11-15 12:48:18.041459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.893 [2024-11-15 12:48:18.041489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.893 qpair failed and we were unable to recover it. 00:26:37.893 [2024-11-15 12:48:18.051343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.893 [2024-11-15 12:48:18.051434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.893 [2024-11-15 12:48:18.051460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.893 [2024-11-15 12:48:18.051474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.893 [2024-11-15 12:48:18.051486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.893 [2024-11-15 12:48:18.051516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.893 qpair failed and we were unable to recover it. 00:26:37.893 [2024-11-15 12:48:18.061475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.893 [2024-11-15 12:48:18.061559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.893 [2024-11-15 12:48:18.061585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.893 [2024-11-15 12:48:18.061599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.893 [2024-11-15 12:48:18.061611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.893 [2024-11-15 12:48:18.061642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.893 qpair failed and we were unable to recover it. 
00:26:37.893 [2024-11-15 12:48:18.071439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.893 [2024-11-15 12:48:18.071538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.893 [2024-11-15 12:48:18.071564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.893 [2024-11-15 12:48:18.071579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.893 [2024-11-15 12:48:18.071591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.893 [2024-11-15 12:48:18.071621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.893 qpair failed and we were unable to recover it. 00:26:37.893 [2024-11-15 12:48:18.081464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.893 [2024-11-15 12:48:18.081551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.893 [2024-11-15 12:48:18.081580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.893 [2024-11-15 12:48:18.081596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.893 [2024-11-15 12:48:18.081609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.893 [2024-11-15 12:48:18.081640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.893 qpair failed and we were unable to recover it. 00:26:37.893 [2024-11-15 12:48:18.091495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.893 [2024-11-15 12:48:18.091574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.893 [2024-11-15 12:48:18.091601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.893 [2024-11-15 12:48:18.091616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.893 [2024-11-15 12:48:18.091628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.893 [2024-11-15 12:48:18.091658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.893 qpair failed and we were unable to recover it. 
00:26:37.893 [2024-11-15 12:48:18.101515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.893 [2024-11-15 12:48:18.101595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.893 [2024-11-15 12:48:18.101621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.893 [2024-11-15 12:48:18.101636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.893 [2024-11-15 12:48:18.101648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.893 [2024-11-15 12:48:18.101679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.893 qpair failed and we were unable to recover it. 00:26:37.893 [2024-11-15 12:48:18.111657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.893 [2024-11-15 12:48:18.111755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.893 [2024-11-15 12:48:18.111784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.893 [2024-11-15 12:48:18.111805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.893 [2024-11-15 12:48:18.111818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.893 [2024-11-15 12:48:18.111849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.893 qpair failed and we were unable to recover it. 00:26:37.893 [2024-11-15 12:48:18.121564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.893 [2024-11-15 12:48:18.121699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.893 [2024-11-15 12:48:18.121732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.893 [2024-11-15 12:48:18.121749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.893 [2024-11-15 12:48:18.121761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.893 [2024-11-15 12:48:18.121791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.893 qpair failed and we were unable to recover it. 
00:26:37.893 [2024-11-15 12:48:18.131564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.893 [2024-11-15 12:48:18.131645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.893 [2024-11-15 12:48:18.131671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.893 [2024-11-15 12:48:18.131685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.893 [2024-11-15 12:48:18.131697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.893 [2024-11-15 12:48:18.131735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.893 qpair failed and we were unable to recover it. 00:26:37.893 [2024-11-15 12:48:18.141693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.893 [2024-11-15 12:48:18.141803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.893 [2024-11-15 12:48:18.141839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.893 [2024-11-15 12:48:18.141857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.893 [2024-11-15 12:48:18.141870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.893 [2024-11-15 12:48:18.141902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.893 qpair failed and we were unable to recover it. 00:26:37.893 [2024-11-15 12:48:18.151668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.893 [2024-11-15 12:48:18.151785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.893 [2024-11-15 12:48:18.151813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.893 [2024-11-15 12:48:18.151828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.893 [2024-11-15 12:48:18.151840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.893 [2024-11-15 12:48:18.151877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.893 qpair failed and we were unable to recover it. 
00:26:37.893 [2024-11-15 12:48:18.161665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.893 [2024-11-15 12:48:18.161760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.893 [2024-11-15 12:48:18.161787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.893 [2024-11-15 12:48:18.161802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.893 [2024-11-15 12:48:18.161814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.893 [2024-11-15 12:48:18.161845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.893 qpair failed and we were unable to recover it. 00:26:37.893 [2024-11-15 12:48:18.171728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.893 [2024-11-15 12:48:18.171855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.893 [2024-11-15 12:48:18.171881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.893 [2024-11-15 12:48:18.171895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.894 [2024-11-15 12:48:18.171907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.894 [2024-11-15 12:48:18.171938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.894 qpair failed and we were unable to recover it. 00:26:37.894 [2024-11-15 12:48:18.181736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.894 [2024-11-15 12:48:18.181827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.894 [2024-11-15 12:48:18.181854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.894 [2024-11-15 12:48:18.181869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.894 [2024-11-15 12:48:18.181881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.894 [2024-11-15 12:48:18.181911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.894 qpair failed and we were unable to recover it. 
00:26:37.894 [2024-11-15 12:48:18.191818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.894 [2024-11-15 12:48:18.191941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.894 [2024-11-15 12:48:18.191968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.894 [2024-11-15 12:48:18.191983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.894 [2024-11-15 12:48:18.191997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.894 [2024-11-15 12:48:18.192027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.894 qpair failed and we were unable to recover it. 00:26:37.894 [2024-11-15 12:48:18.201787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.894 [2024-11-15 12:48:18.201872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.894 [2024-11-15 12:48:18.201898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.894 [2024-11-15 12:48:18.201912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.894 [2024-11-15 12:48:18.201925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.894 [2024-11-15 12:48:18.201955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.894 qpair failed and we were unable to recover it. 00:26:37.894 [2024-11-15 12:48:18.211849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.894 [2024-11-15 12:48:18.211935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.894 [2024-11-15 12:48:18.211961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.894 [2024-11-15 12:48:18.211976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.894 [2024-11-15 12:48:18.211988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.894 [2024-11-15 12:48:18.212018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.894 qpair failed and we were unable to recover it. 
00:26:37.894 [2024-11-15 12:48:18.221886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.894 [2024-11-15 12:48:18.221976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.894 [2024-11-15 12:48:18.222002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.894 [2024-11-15 12:48:18.222016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.894 [2024-11-15 12:48:18.222029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.894 [2024-11-15 12:48:18.222059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.894 qpair failed and we were unable to recover it. 00:26:37.894 [2024-11-15 12:48:18.231896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:37.894 [2024-11-15 12:48:18.231989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:37.894 [2024-11-15 12:48:18.232015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:37.894 [2024-11-15 12:48:18.232030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:37.894 [2024-11-15 12:48:18.232042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:37.894 [2024-11-15 12:48:18.232072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:37.894 qpair failed and we were unable to recover it. 00:26:38.153 [2024-11-15 12:48:18.241995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.153 [2024-11-15 12:48:18.242076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.153 [2024-11-15 12:48:18.242109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.153 [2024-11-15 12:48:18.242125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.153 [2024-11-15 12:48:18.242138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.153 [2024-11-15 12:48:18.242168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.153 qpair failed and we were unable to recover it. 
00:26:38.153 [2024-11-15 12:48:18.252035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.153 [2024-11-15 12:48:18.252123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.153 [2024-11-15 12:48:18.252149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.153 [2024-11-15 12:48:18.252164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.153 [2024-11-15 12:48:18.252176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.153 [2024-11-15 12:48:18.252206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.153 qpair failed and we were unable to recover it. 00:26:38.153 [2024-11-15 12:48:18.262001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.153 [2024-11-15 12:48:18.262086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.153 [2024-11-15 12:48:18.262112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.153 [2024-11-15 12:48:18.262127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.153 [2024-11-15 12:48:18.262139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.153 [2024-11-15 12:48:18.262169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.153 qpair failed and we were unable to recover it. 00:26:38.153 [2024-11-15 12:48:18.272064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.153 [2024-11-15 12:48:18.272156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.153 [2024-11-15 12:48:18.272182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.153 [2024-11-15 12:48:18.272197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.153 [2024-11-15 12:48:18.272210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.153 [2024-11-15 12:48:18.272239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.153 qpair failed and we were unable to recover it. 
00:26:38.153 [2024-11-15 12:48:18.282003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.153 [2024-11-15 12:48:18.282093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.153 [2024-11-15 12:48:18.282118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.153 [2024-11-15 12:48:18.282133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.153 [2024-11-15 12:48:18.282151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.153 [2024-11-15 12:48:18.282181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.153 qpair failed and we were unable to recover it. 00:26:38.153 [2024-11-15 12:48:18.292039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.153 [2024-11-15 12:48:18.292134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.153 [2024-11-15 12:48:18.292160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.153 [2024-11-15 12:48:18.292175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.153 [2024-11-15 12:48:18.292187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.154 [2024-11-15 12:48:18.292217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.154 qpair failed and we were unable to recover it. 00:26:38.154 [2024-11-15 12:48:18.302092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.154 [2024-11-15 12:48:18.302177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.154 [2024-11-15 12:48:18.302202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.154 [2024-11-15 12:48:18.302217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.154 [2024-11-15 12:48:18.302229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.154 [2024-11-15 12:48:18.302258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.154 qpair failed and we were unable to recover it. 
00:26:38.154 [2024-11-15 12:48:18.312261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.154 [2024-11-15 12:48:18.312388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.154 [2024-11-15 12:48:18.312413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.154 [2024-11-15 12:48:18.312428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.154 [2024-11-15 12:48:18.312440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.154 [2024-11-15 12:48:18.312470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.154 qpair failed and we were unable to recover it. 00:26:38.154 [2024-11-15 12:48:18.322195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.154 [2024-11-15 12:48:18.322286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.154 [2024-11-15 12:48:18.322311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.154 [2024-11-15 12:48:18.322326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.154 [2024-11-15 12:48:18.322338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.154 [2024-11-15 12:48:18.322367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.154 qpair failed and we were unable to recover it. 00:26:38.154 [2024-11-15 12:48:18.332157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.154 [2024-11-15 12:48:18.332252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.154 [2024-11-15 12:48:18.332277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.154 [2024-11-15 12:48:18.332291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.154 [2024-11-15 12:48:18.332304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.154 [2024-11-15 12:48:18.332334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.154 qpair failed and we were unable to recover it. 
00:26:38.154 [2024-11-15 12:48:18.342214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.154 [2024-11-15 12:48:18.342300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.154 [2024-11-15 12:48:18.342325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.154 [2024-11-15 12:48:18.342340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.154 [2024-11-15 12:48:18.342352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.154 [2024-11-15 12:48:18.342382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.154 qpair failed and we were unable to recover it. 00:26:38.154 [2024-11-15 12:48:18.352257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.154 [2024-11-15 12:48:18.352371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.154 [2024-11-15 12:48:18.352396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.154 [2024-11-15 12:48:18.352411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.154 [2024-11-15 12:48:18.352423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.154 [2024-11-15 12:48:18.352453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.154 qpair failed and we were unable to recover it. 00:26:38.154 [2024-11-15 12:48:18.362383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.154 [2024-11-15 12:48:18.362523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.154 [2024-11-15 12:48:18.362550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.154 [2024-11-15 12:48:18.362565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.154 [2024-11-15 12:48:18.362577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.154 [2024-11-15 12:48:18.362607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.154 qpair failed and we were unable to recover it. 
00:26:38.154 [2024-11-15 12:48:18.372306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.154 [2024-11-15 12:48:18.372390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.154 [2024-11-15 12:48:18.372421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.154 [2024-11-15 12:48:18.372437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.154 [2024-11-15 12:48:18.372449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.154 [2024-11-15 12:48:18.372479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.154 qpair failed and we were unable to recover it. 00:26:38.154 [2024-11-15 12:48:18.382342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.154 [2024-11-15 12:48:18.382459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.154 [2024-11-15 12:48:18.382484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.154 [2024-11-15 12:48:18.382499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.154 [2024-11-15 12:48:18.382511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.154 [2024-11-15 12:48:18.382541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.154 qpair failed and we were unable to recover it. 00:26:38.154 [2024-11-15 12:48:18.392378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.154 [2024-11-15 12:48:18.392510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.154 [2024-11-15 12:48:18.392537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.154 [2024-11-15 12:48:18.392553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.154 [2024-11-15 12:48:18.392565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.154 [2024-11-15 12:48:18.392597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.154 qpair failed and we were unable to recover it. 
00:26:38.154 [2024-11-15 12:48:18.402356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.154 [2024-11-15 12:48:18.402445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.154 [2024-11-15 12:48:18.402472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.154 [2024-11-15 12:48:18.402487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.154 [2024-11-15 12:48:18.402500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.154 [2024-11-15 12:48:18.402531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.154 qpair failed and we were unable to recover it. 00:26:38.154 [2024-11-15 12:48:18.412405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.154 [2024-11-15 12:48:18.412501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.154 [2024-11-15 12:48:18.412527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.154 [2024-11-15 12:48:18.412541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.154 [2024-11-15 12:48:18.412560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.154 [2024-11-15 12:48:18.412591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.154 qpair failed and we were unable to recover it. 00:26:38.154 [2024-11-15 12:48:18.422508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.154 [2024-11-15 12:48:18.422595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.154 [2024-11-15 12:48:18.422621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.154 [2024-11-15 12:48:18.422635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.154 [2024-11-15 12:48:18.422648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.155 [2024-11-15 12:48:18.422678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.155 qpair failed and we were unable to recover it. 
00:26:38.155 [2024-11-15 12:48:18.432508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.155 [2024-11-15 12:48:18.432650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.155 [2024-11-15 12:48:18.432675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.155 [2024-11-15 12:48:18.432689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.155 [2024-11-15 12:48:18.432701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.155 [2024-11-15 12:48:18.432738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.155 qpair failed and we were unable to recover it. 00:26:38.155 [2024-11-15 12:48:18.442478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.155 [2024-11-15 12:48:18.442569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.155 [2024-11-15 12:48:18.442596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.155 [2024-11-15 12:48:18.442610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.155 [2024-11-15 12:48:18.442623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.155 [2024-11-15 12:48:18.442652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.155 qpair failed and we were unable to recover it. 00:26:38.155 [2024-11-15 12:48:18.452497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.155 [2024-11-15 12:48:18.452582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.155 [2024-11-15 12:48:18.452608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.155 [2024-11-15 12:48:18.452623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.155 [2024-11-15 12:48:18.452635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.155 [2024-11-15 12:48:18.452665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.155 qpair failed and we were unable to recover it. 
00:26:38.155 [2024-11-15 12:48:18.462612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.155 [2024-11-15 12:48:18.462695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.155 [2024-11-15 12:48:18.462728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.155 [2024-11-15 12:48:18.462745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.155 [2024-11-15 12:48:18.462757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.155 [2024-11-15 12:48:18.462787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.155 qpair failed and we were unable to recover it. 00:26:38.155 [2024-11-15 12:48:18.472659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.155 [2024-11-15 12:48:18.472748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.155 [2024-11-15 12:48:18.472773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.155 [2024-11-15 12:48:18.472788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.155 [2024-11-15 12:48:18.472800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.155 [2024-11-15 12:48:18.472831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.155 qpair failed and we were unable to recover it. 00:26:38.155 [2024-11-15 12:48:18.482676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.155 [2024-11-15 12:48:18.482767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.155 [2024-11-15 12:48:18.482793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.155 [2024-11-15 12:48:18.482807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.155 [2024-11-15 12:48:18.482820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.155 [2024-11-15 12:48:18.482850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.155 qpair failed and we were unable to recover it. 
00:26:38.155 [2024-11-15 12:48:18.492700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.155 [2024-11-15 12:48:18.492793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.155 [2024-11-15 12:48:18.492819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.155 [2024-11-15 12:48:18.492833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.155 [2024-11-15 12:48:18.492846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.155 [2024-11-15 12:48:18.492876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.155 qpair failed and we were unable to recover it. 00:26:38.414 [2024-11-15 12:48:18.502665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.414 [2024-11-15 12:48:18.502775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.414 [2024-11-15 12:48:18.502807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.414 [2024-11-15 12:48:18.502822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.414 [2024-11-15 12:48:18.502835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.414 [2024-11-15 12:48:18.502865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.414 qpair failed and we were unable to recover it. 00:26:38.414 [2024-11-15 12:48:18.512751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.414 [2024-11-15 12:48:18.512873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.414 [2024-11-15 12:48:18.512899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.414 [2024-11-15 12:48:18.512913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.414 [2024-11-15 12:48:18.512926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.414 [2024-11-15 12:48:18.512955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.414 qpair failed and we were unable to recover it. 
00:26:38.414 [2024-11-15 12:48:18.522703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.414 [2024-11-15 12:48:18.522801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.414 [2024-11-15 12:48:18.522827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.414 [2024-11-15 12:48:18.522842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.414 [2024-11-15 12:48:18.522854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.414 [2024-11-15 12:48:18.522883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.414 qpair failed and we were unable to recover it. 00:26:38.414 [2024-11-15 12:48:18.532746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.414 [2024-11-15 12:48:18.532843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.414 [2024-11-15 12:48:18.532869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.414 [2024-11-15 12:48:18.532884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.414 [2024-11-15 12:48:18.532896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.414 [2024-11-15 12:48:18.532926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.414 qpair failed and we were unable to recover it. 00:26:38.414 [2024-11-15 12:48:18.542822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.415 [2024-11-15 12:48:18.542906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.415 [2024-11-15 12:48:18.542932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.415 [2024-11-15 12:48:18.542952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.415 [2024-11-15 12:48:18.542966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.415 [2024-11-15 12:48:18.542996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.415 qpair failed and we were unable to recover it. 
00:26:38.415 [2024-11-15 12:48:18.552807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.415 [2024-11-15 12:48:18.552896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.415 [2024-11-15 12:48:18.552922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.415 [2024-11-15 12:48:18.552936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.415 [2024-11-15 12:48:18.552949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.415 [2024-11-15 12:48:18.552979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.415 qpair failed and we were unable to recover it. 00:26:38.415 [2024-11-15 12:48:18.562885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.415 [2024-11-15 12:48:18.562969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.415 [2024-11-15 12:48:18.562994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.415 [2024-11-15 12:48:18.563008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.415 [2024-11-15 12:48:18.563020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.415 [2024-11-15 12:48:18.563050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.415 qpair failed and we were unable to recover it. 00:26:38.415 [2024-11-15 12:48:18.572890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.415 [2024-11-15 12:48:18.572975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.415 [2024-11-15 12:48:18.573000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.415 [2024-11-15 12:48:18.573015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.415 [2024-11-15 12:48:18.573027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.415 [2024-11-15 12:48:18.573057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.415 qpair failed and we were unable to recover it. 
00:26:38.415 [2024-11-15 12:48:18.582887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.415 [2024-11-15 12:48:18.582970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.415 [2024-11-15 12:48:18.582996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.415 [2024-11-15 12:48:18.583011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.415 [2024-11-15 12:48:18.583023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.415 [2024-11-15 12:48:18.583058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.415 qpair failed and we were unable to recover it. 00:26:38.415 [2024-11-15 12:48:18.592954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.415 [2024-11-15 12:48:18.593047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.415 [2024-11-15 12:48:18.593073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.415 [2024-11-15 12:48:18.593088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.415 [2024-11-15 12:48:18.593100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.415 [2024-11-15 12:48:18.593130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.415 qpair failed and we were unable to recover it. 00:26:38.415 [2024-11-15 12:48:18.602997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.415 [2024-11-15 12:48:18.603098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.415 [2024-11-15 12:48:18.603123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.415 [2024-11-15 12:48:18.603137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.415 [2024-11-15 12:48:18.603150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.415 [2024-11-15 12:48:18.603179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.415 qpair failed and we were unable to recover it. 
00:26:38.415 [2024-11-15 12:48:18.613032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.415 [2024-11-15 12:48:18.613117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.415 [2024-11-15 12:48:18.613143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.415 [2024-11-15 12:48:18.613158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.415 [2024-11-15 12:48:18.613170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.415 [2024-11-15 12:48:18.613213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.415 qpair failed and we were unable to recover it. 00:26:38.415 [2024-11-15 12:48:18.623067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.415 [2024-11-15 12:48:18.623150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.415 [2024-11-15 12:48:18.623176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.415 [2024-11-15 12:48:18.623191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.415 [2024-11-15 12:48:18.623203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.415 [2024-11-15 12:48:18.623234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.415 qpair failed and we were unable to recover it. 00:26:38.415 [2024-11-15 12:48:18.633067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.415 [2024-11-15 12:48:18.633194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.415 [2024-11-15 12:48:18.633221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.415 [2024-11-15 12:48:18.633235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.415 [2024-11-15 12:48:18.633248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.415 [2024-11-15 12:48:18.633278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.415 qpair failed and we were unable to recover it. 
00:26:38.415 [2024-11-15 12:48:18.643092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.415 [2024-11-15 12:48:18.643213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.415 [2024-11-15 12:48:18.643239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.415 [2024-11-15 12:48:18.643254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.415 [2024-11-15 12:48:18.643267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.415 [2024-11-15 12:48:18.643299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.415 qpair failed and we were unable to recover it. 00:26:38.415 [2024-11-15 12:48:18.653089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.415 [2024-11-15 12:48:18.653219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.415 [2024-11-15 12:48:18.653246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.415 [2024-11-15 12:48:18.653262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.415 [2024-11-15 12:48:18.653274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.415 [2024-11-15 12:48:18.653305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.415 qpair failed and we were unable to recover it. 00:26:38.415 [2024-11-15 12:48:18.663150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.415 [2024-11-15 12:48:18.663243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.415 [2024-11-15 12:48:18.663278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.415 [2024-11-15 12:48:18.663294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.415 [2024-11-15 12:48:18.663307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.415 [2024-11-15 12:48:18.663343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.415 qpair failed and we were unable to recover it. 
00:26:38.415 [2024-11-15 12:48:18.673174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.415 [2024-11-15 12:48:18.673286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.416 [2024-11-15 12:48:18.673313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.416 [2024-11-15 12:48:18.673339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.416 [2024-11-15 12:48:18.673353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.416 [2024-11-15 12:48:18.673384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.416 qpair failed and we were unable to recover it. 00:26:38.416 [2024-11-15 12:48:18.683200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.416 [2024-11-15 12:48:18.683291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.416 [2024-11-15 12:48:18.683318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.416 [2024-11-15 12:48:18.683333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.416 [2024-11-15 12:48:18.683346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.416 [2024-11-15 12:48:18.683377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.416 qpair failed and we were unable to recover it. 00:26:38.416 [2024-11-15 12:48:18.693224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.416 [2024-11-15 12:48:18.693317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.416 [2024-11-15 12:48:18.693343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.416 [2024-11-15 12:48:18.693358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.416 [2024-11-15 12:48:18.693370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.416 [2024-11-15 12:48:18.693400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.416 qpair failed and we were unable to recover it. 
00:26:38.416 [2024-11-15 12:48:18.703278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.416 [2024-11-15 12:48:18.703392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.416 [2024-11-15 12:48:18.703417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.416 [2024-11-15 12:48:18.703432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.416 [2024-11-15 12:48:18.703444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.416 [2024-11-15 12:48:18.703473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.416 qpair failed and we were unable to recover it. 00:26:38.416 [2024-11-15 12:48:18.713348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.416 [2024-11-15 12:48:18.713436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.416 [2024-11-15 12:48:18.713462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.416 [2024-11-15 12:48:18.713476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.416 [2024-11-15 12:48:18.713489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.416 [2024-11-15 12:48:18.713524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.416 qpair failed and we were unable to recover it. 00:26:38.416 [2024-11-15 12:48:18.723319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.416 [2024-11-15 12:48:18.723439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.416 [2024-11-15 12:48:18.723465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.416 [2024-11-15 12:48:18.723479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.416 [2024-11-15 12:48:18.723491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.416 [2024-11-15 12:48:18.723521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.416 qpair failed and we were unable to recover it. 
00:26:38.416 [2024-11-15 12:48:18.733358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.416 [2024-11-15 12:48:18.733479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.416 [2024-11-15 12:48:18.733505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.416 [2024-11-15 12:48:18.733520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.416 [2024-11-15 12:48:18.733532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.416 [2024-11-15 12:48:18.733562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.416 qpair failed and we were unable to recover it. 00:26:38.416 [2024-11-15 12:48:18.743373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.416 [2024-11-15 12:48:18.743452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.416 [2024-11-15 12:48:18.743478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.416 [2024-11-15 12:48:18.743492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.416 [2024-11-15 12:48:18.743505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.416 [2024-11-15 12:48:18.743534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.416 qpair failed and we were unable to recover it. 00:26:38.416 [2024-11-15 12:48:18.753470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.416 [2024-11-15 12:48:18.753560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.416 [2024-11-15 12:48:18.753586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.416 [2024-11-15 12:48:18.753600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.416 [2024-11-15 12:48:18.753612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.416 [2024-11-15 12:48:18.753642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.416 qpair failed and we were unable to recover it. 
00:26:38.674 [2024-11-15 12:48:18.763422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.674 [2024-11-15 12:48:18.763506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.674 [2024-11-15 12:48:18.763532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.674 [2024-11-15 12:48:18.763547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.674 [2024-11-15 12:48:18.763559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.674 [2024-11-15 12:48:18.763589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.674 qpair failed and we were unable to recover it. 00:26:38.675 [2024-11-15 12:48:18.773526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.675 [2024-11-15 12:48:18.773611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.675 [2024-11-15 12:48:18.773637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.675 [2024-11-15 12:48:18.773651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.675 [2024-11-15 12:48:18.773664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea04000b90 00:26:38.675 [2024-11-15 12:48:18.773693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.675 qpair failed and we were unable to recover it. 00:26:38.675 [2024-11-15 12:48:18.783526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.675 [2024-11-15 12:48:18.783668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.675 [2024-11-15 12:48:18.783703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.675 [2024-11-15 12:48:18.783729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.675 [2024-11-15 12:48:18.783744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea00000b90 00:26:38.675 [2024-11-15 12:48:18.783777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:38.675 qpair failed and we were unable to recover it. 
00:26:38.675 [2024-11-15 12:48:18.793549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.675 [2024-11-15 12:48:18.793674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.675 [2024-11-15 12:48:18.793702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.675 [2024-11-15 12:48:18.793723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.675 [2024-11-15 12:48:18.793738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea00000b90 00:26:38.675 [2024-11-15 12:48:18.793769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:38.675 qpair failed and we were unable to recover it. 00:26:38.675 [2024-11-15 12:48:18.803573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.675 [2024-11-15 12:48:18.803664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.675 [2024-11-15 12:48:18.803703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.675 [2024-11-15 12:48:18.803728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.675 [2024-11-15 12:48:18.803743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdefa0 00:26:38.675 [2024-11-15 12:48:18.803774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:38.675 qpair failed and we were unable to recover it. 00:26:38.675 [2024-11-15 12:48:18.813545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.675 [2024-11-15 12:48:18.813634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.675 [2024-11-15 12:48:18.813664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.675 [2024-11-15 12:48:18.813679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.675 [2024-11-15 12:48:18.813692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bdefa0 00:26:38.675 [2024-11-15 12:48:18.813730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:38.675 qpair failed and we were unable to recover it. 00:26:38.675 [2024-11-15 12:48:18.813859] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:26:38.675 A controller has encountered a failure and is being reset. 
00:26:38.675 [2024-11-15 12:48:18.823663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.675 [2024-11-15 12:48:18.823765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.675 [2024-11-15 12:48:18.823801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.675 [2024-11-15 12:48:18.823827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.675 [2024-11-15 12:48:18.823850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea0c000b90 00:26:38.675 [2024-11-15 12:48:18.823910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:38.675 qpair failed and we were unable to recover it. 00:26:38.675 [2024-11-15 12:48:18.833611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:38.675 [2024-11-15 12:48:18.833712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:38.675 [2024-11-15 12:48:18.833751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:38.675 [2024-11-15 12:48:18.833776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:38.675 [2024-11-15 12:48:18.833800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fea0c000b90 00:26:38.675 [2024-11-15 12:48:18.833843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:38.675 qpair failed and we were unable to recover it. 00:26:38.675 Controller properly reset. 00:26:38.675 Initializing NVMe Controllers 00:26:38.675 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:38.675 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:38.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:38.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:38.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:38.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:38.675 Initialization complete. Launching workers. 
00:26:38.675 Starting thread on core 1 00:26:38.675 Starting thread on core 2 00:26:38.675 Starting thread on core 3 00:26:38.675 Starting thread on core 0 00:26:38.675 12:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:38.675 00:26:38.675 real 0m11.048s 00:26:38.675 user 0m19.208s 00:26:38.675 sys 0m5.145s 00:26:38.675 12:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:38.675 12:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:38.675 ************************************ 00:26:38.675 END TEST nvmf_target_disconnect_tc2 00:26:38.675 ************************************ 00:26:38.675 12:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:38.675 12:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:38.675 12:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:38.675 12:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:38.675 12:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:26:38.675 12:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:38.675 12:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:26:38.675 12:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:38.675 12:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:38.675 rmmod nvme_tcp 00:26:38.675 rmmod nvme_fabrics 00:26:38.675 rmmod nvme_keyring 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1136542 ']' 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1136542 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1136542 ']' 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1136542 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1136542 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1136542' 00:26:38.934 killing process with pid 1136542 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 1136542 00:26:38.934 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1136542 00:26:39.194 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:39.194 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:39.194 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:39.194 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:26:39.194 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:26:39.194 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:39.194 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:26:39.194 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.194 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:39.194 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.194 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.194 12:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.100 12:48:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:41.100 00:26:41.100 real 0m16.090s 00:26:41.100 user 0m46.580s 00:26:41.100 sys 0m7.250s 00:26:41.101 12:48:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:41.101 12:48:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:41.101 ************************************ 00:26:41.101 END TEST nvmf_target_disconnect 00:26:41.101 ************************************ 00:26:41.101 12:48:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:41.101 00:26:41.101 real 5m6.583s 00:26:41.101 user 10m51.045s 00:26:41.101 sys 1m14.375s 00:26:41.101 12:48:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:41.101 12:48:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.101 ************************************ 00:26:41.101 END TEST nvmf_host 00:26:41.101 ************************************ 00:26:41.101 12:48:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:41.101 12:48:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:41.101 12:48:21 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:41.101 12:48:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:41.101 12:48:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:41.101 12:48:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.101 ************************************ 00:26:41.101 START TEST nvmf_target_core_interrupt_mode 00:26:41.101 ************************************ 00:26:41.101 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:41.359 * Looking for test storage... 00:26:41.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:41.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.360 --rc genhtml_branch_coverage=1 00:26:41.360 --rc genhtml_function_coverage=1 00:26:41.360 --rc genhtml_legend=1 00:26:41.360 --rc geninfo_all_blocks=1 00:26:41.360 --rc geninfo_unexecuted_blocks=1 00:26:41.360 00:26:41.360 ' 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:41.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.360 --rc genhtml_branch_coverage=1 00:26:41.360 --rc genhtml_function_coverage=1 00:26:41.360 --rc genhtml_legend=1 00:26:41.360 --rc geninfo_all_blocks=1 00:26:41.360 --rc geninfo_unexecuted_blocks=1 00:26:41.360 00:26:41.360 ' 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:41.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.360 --rc genhtml_branch_coverage=1 00:26:41.360 --rc genhtml_function_coverage=1 00:26:41.360 --rc genhtml_legend=1 00:26:41.360 --rc geninfo_all_blocks=1 00:26:41.360 --rc geninfo_unexecuted_blocks=1 00:26:41.360 00:26:41.360 ' 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:41.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.360 --rc genhtml_branch_coverage=1 00:26:41.360 --rc genhtml_function_coverage=1 00:26:41.360 --rc genhtml_legend=1 00:26:41.360 --rc geninfo_all_blocks=1 00:26:41.360 --rc geninfo_unexecuted_blocks=1 00:26:41.360 00:26:41.360 ' 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:41.360 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:41.361 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:41.361 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:41.361 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:41.361 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:41.361 ************************************ 00:26:41.361 START TEST nvmf_abort 00:26:41.361 ************************************ 00:26:41.361 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:41.361 * Looking for test storage... 00:26:41.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:41.361 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:41.361 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:26:41.361 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:41.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.620 --rc genhtml_branch_coverage=1 00:26:41.620 --rc genhtml_function_coverage=1 00:26:41.620 --rc genhtml_legend=1 00:26:41.620 --rc geninfo_all_blocks=1 00:26:41.620 --rc geninfo_unexecuted_blocks=1 00:26:41.620 00:26:41.620 ' 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:41.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.620 --rc genhtml_branch_coverage=1 00:26:41.620 --rc genhtml_function_coverage=1 00:26:41.620 --rc genhtml_legend=1 00:26:41.620 --rc geninfo_all_blocks=1 00:26:41.620 --rc geninfo_unexecuted_blocks=1 00:26:41.620 00:26:41.620 ' 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:41.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.620 --rc genhtml_branch_coverage=1 00:26:41.620 --rc genhtml_function_coverage=1 00:26:41.620 --rc genhtml_legend=1 00:26:41.620 --rc geninfo_all_blocks=1 00:26:41.620 --rc geninfo_unexecuted_blocks=1 00:26:41.620 00:26:41.620 ' 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:41.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.620 --rc genhtml_branch_coverage=1 00:26:41.620 --rc genhtml_function_coverage=1 00:26:41.620 --rc genhtml_legend=1 00:26:41.620 --rc geninfo_all_blocks=1 00:26:41.620 --rc geninfo_unexecuted_blocks=1 00:26:41.620 00:26:41.620 ' 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.620 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.621 12:48:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:26:41.621 12:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:44.152 12:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.152 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:44.153 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:44.153 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:44.153 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:44.153 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.153 12:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:44.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:26:44.153 00:26:44.153 --- 10.0.0.2 ping statistics --- 00:26:44.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.153 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:44.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:26:44.153 00:26:44.153 --- 10.0.0.1 ping statistics --- 00:26:44.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.153 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1139468 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1139468 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1139468 ']' 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:44.153 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:44.154 [2024-11-15 12:48:24.144809] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:44.154 [2024-11-15 12:48:24.145898] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:26:44.154 [2024-11-15 12:48:24.145953] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.154 [2024-11-15 12:48:24.217850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:44.154 [2024-11-15 12:48:24.278189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.154 [2024-11-15 12:48:24.278248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.154 [2024-11-15 12:48:24.278271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.154 [2024-11-15 12:48:24.278290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.154 [2024-11-15 12:48:24.278305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.154 [2024-11-15 12:48:24.279920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.154 [2024-11-15 12:48:24.279973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:44.154 [2024-11-15 12:48:24.279977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.154 [2024-11-15 12:48:24.376949] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:44.154 [2024-11-15 12:48:24.377111] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:44.154 [2024-11-15 12:48:24.377122] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
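For reference, the -m 0xE mask handed to nvmf_tgt above selects cores 1-3 (0xE = 0b1110), which is why the DPDK EAL parameters show -c 0xE, "Total cores available: 3" is reported, and the reactors below start on cores 1, 2 and 3; core 0 is left to the host-side abort example, which runs with -c 0x1 further down.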
00:26:44.154 [2024-11-15 12:48:24.377392] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:44.154 [2024-11-15 12:48:24.424676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:44.154 Malloc0 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:44.154 Delay0 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.154 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:44.412 [2024-11-15 12:48:24.496886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.412 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.412 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:44.412 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.412 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:44.412 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.412 12:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:44.412 [2024-11-15 12:48:24.566240] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:46.310 Initializing NVMe Controllers 00:26:46.310 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:46.310 controller IO queue size 128 less than required 00:26:46.310 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:46.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:46.310 Initialization complete. Launching workers. 
00:26:46.310 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28726 00:26:46.310 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28783, failed to submit 66 00:26:46.310 success 28726, unsuccessful 57, failed 0 00:26:46.310 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:46.310 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.310 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:46.310 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.310 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:46.310 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:46.310 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:46.310 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:46.310 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:46.310 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:46.310 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:46.310 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:46.310 rmmod nvme_tcp 00:26:46.310 rmmod nvme_fabrics 00:26:46.310 rmmod nvme_keyring 00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1139468 ']' 00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1139468 00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1139468 ']' 00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1139468 00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1139468 00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1139468' 00:26:46.568 killing process with pid 1139468 
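For readers who want to replay the abort scenario above outside the test harness: the rpc_cmd calls in the trace effectively invoke SPDK's scripts/rpc.py, so the target-side setup and the host-side run reduce to roughly the sketch below. Flag values are copied from the trace; the working directory, scripts/rpc.py path and default /var/tmp/spdk.sock RPC socket are assumptions, and the target is assumed to already be running as in the trace (nvmf_tgt --interrupt-mode -m 0xE inside the cvl_0_0_ns_spdk namespace).

  # TCP transport plus a 64 MB / 4096-byte-block malloc bdev wrapped in a delay bdev
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # subsystem nqn.2016-06.io.spdk:cnode0 with the delayed namespace, listening on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # host side: queue depth 128 against the deliberately slow namespace, so outstanding I/Os pile up and get aborted
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128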
00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1139468 00:26:46.568 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1139468 00:26:46.827 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:46.827 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:46.827 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:46.827 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:46.827 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:26:46.827 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:46.827 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:26:46.827 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:46.827 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:46.827 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.827 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.827 12:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.733 12:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:48.733 00:26:48.733 real 0m7.375s 00:26:48.733 user 0m8.971s 00:26:48.733 sys 0m3.011s 00:26:48.733 12:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:48.733 12:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:48.733 ************************************ 00:26:48.733 END TEST nvmf_abort 00:26:48.733 ************************************ 00:26:48.733 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:48.733 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:48.733 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:48.733 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:48.733 ************************************ 00:26:48.733 START TEST nvmf_ns_hotplug_stress 00:26:48.733 ************************************ 00:26:48.733 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:48.733 * Looking for test storage... 
00:26:48.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:48.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.992 --rc genhtml_branch_coverage=1 00:26:48.992 --rc genhtml_function_coverage=1 00:26:48.992 --rc genhtml_legend=1 00:26:48.992 --rc geninfo_all_blocks=1 00:26:48.992 --rc geninfo_unexecuted_blocks=1 00:26:48.992 00:26:48.992 ' 00:26:48.992 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:48.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.992 --rc genhtml_branch_coverage=1 00:26:48.992 --rc genhtml_function_coverage=1 00:26:48.992 --rc genhtml_legend=1 00:26:48.992 --rc geninfo_all_blocks=1 00:26:48.992 --rc geninfo_unexecuted_blocks=1 00:26:48.992 00:26:48.992 ' 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:48.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.993 --rc genhtml_branch_coverage=1 00:26:48.993 --rc genhtml_function_coverage=1 00:26:48.993 --rc genhtml_legend=1 00:26:48.993 --rc geninfo_all_blocks=1 00:26:48.993 --rc geninfo_unexecuted_blocks=1 00:26:48.993 00:26:48.993 ' 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:48.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.993 --rc genhtml_branch_coverage=1 00:26:48.993 --rc genhtml_function_coverage=1 
00:26:48.993 --rc genhtml_legend=1 00:26:48.993 --rc geninfo_all_blocks=1 00:26:48.993 --rc geninfo_unexecuted_blocks=1 00:26:48.993 00:26:48.993 ' 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:48.993 12:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:50.899 12:48:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:50.899 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.900 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.900 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.900 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.900 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:50.900 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:50.900 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:50.900 12:48:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:50.900 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:50.900 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:50.900 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.900 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:50.900 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:50.900 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:51.159 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:51.159 
12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:51.159 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:51.159 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.159 12:48:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:51.159 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:51.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:26:51.160 00:26:51.160 --- 10.0.0.2 ping statistics --- 00:26:51.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.160 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:26:51.160 00:26:51.160 --- 10.0.0.1 ping statistics --- 00:26:51.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.160 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1141699 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1141699 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1141699 ']' 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
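The data-path setup that nvmf_tcp_init just performed on the two E810 ports can be read as one short sequence: the first port (cvl_0_0) moves into a private network namespace and becomes the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits the NVMe/TCP port, and both directions are verified with ping. Condensed sketch of the commands as traced:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns

Because of this split, every nvmf_tgt invocation that follows is wrapped in "ip netns exec cvl_0_0_ns_spdk".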
00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:51.160 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:51.160 [2024-11-15 12:48:31.463841] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:51.160 [2024-11-15 12:48:31.464913] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:26:51.160 [2024-11-15 12:48:31.464981] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.418 [2024-11-15 12:48:31.536889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:51.418 [2024-11-15 12:48:31.594121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.418 [2024-11-15 12:48:31.594176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.418 [2024-11-15 12:48:31.594205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.418 [2024-11-15 12:48:31.594216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.418 [2024-11-15 12:48:31.594226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:51.418 [2024-11-15 12:48:31.595780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.418 [2024-11-15 12:48:31.595852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.418 [2024-11-15 12:48:31.595856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.418 [2024-11-15 12:48:31.691896] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:51.418 [2024-11-15 12:48:31.692139] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:51.418 [2024-11-15 12:48:31.692150] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:51.418 [2024-11-15 12:48:31.692411] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
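The target was started with --interrupt-mode and -m 0xE, which is why the EAL reports three available cores and the reactor notices above show cores 1, 2 and 3, each poll group then being switched to interrupt mode. A quick check of what the mask selects (a hypothetical helper snippet, not part of the test):

    mask=0xE
    echo "obase=2; $((mask))" | bc    # prints 1110: bit 0 clear, bits 1-3 set, i.e. cores 1-3
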
00:26:51.418 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:51.418 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:26:51.418 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:51.418 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:51.418 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:51.418 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.418 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:26:51.418 12:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:51.677 [2024-11-15 12:48:32.008540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.935 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:52.194 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.452 [2024-11-15 12:48:32.565033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.452 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:52.710 12:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:26:52.968 Malloc0 00:26:52.968 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:53.228 Delay0 00:26:53.228 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:53.486 12:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:26:53.743 NULL1 00:26:53.743 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
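The target configuration driven through rpc.py in the trace above amounts to a short sequence: create the TCP transport, create subsystem cnode1 with a ten-namespace cap, expose it (plus discovery) on 10.0.0.2:4420, then attach two namespaces, a delay bdev layered on a malloc bdev and a null bdev. Condensed sketch with the values exactly as traced (RPC_PY stands in for the full scripts/rpc.py path used in the workspace):

    RPC_PY=scripts/rpc.py
    $RPC_PY nvmf_create_transport -t tcp -o -u 8192
    $RPC_PY nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC_PY nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC_PY nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC_PY bdev_malloc_create 32 512 -b Malloc0          # 32 MiB backing bdev, 512 B blocks
    $RPC_PY bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
                                                          # ~1 s artificial latency on reads/writes
    $RPC_PY nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # namespace 1
    $RPC_PY bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, resized later
    $RPC_PY nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # namespace 2

The slow Delay0 namespace is what keeps I/O in flight long enough for the hotplug loop below to race against it.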
00:26:54.308 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1142110 00:26:54.308 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:26:54.308 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:26:54.309 12:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:55.241 Read completed with error (sct=0, sc=11) 00:26:55.241 12:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:55.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.756 12:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:26:55.756 12:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:26:56.014 true 00:26:56.014 12:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:26:56.014 12:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:56.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:56.947 12:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:56.947 12:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:26:56.948 12:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:26:57.205 true 00:26:57.205 12:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:26:57.205 12:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:57.462 12:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:57.719 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:26:57.720 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:26:57.977 true 00:26:57.977 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:26:57.977 12:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:58.908 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:58.909 12:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:59.165 12:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:26:59.165 12:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:26:59.422 true 00:26:59.422 12:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:26:59.423 12:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:59.681 12:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:59.938 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:26:59.938 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:00.195 true 00:27:00.195 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:00.195 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:00.453 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:00.710 12:48:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:00.710 12:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:00.967 true 00:27:00.967 12:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:00.967 12:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:01.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:01.982 12:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:01.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:02.240 12:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:02.240 12:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:02.497 true 00:27:02.497 12:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:02.497 12:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:02.755 12:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:03.013 12:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:03.013 12:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:03.270 true 00:27:03.270 12:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:03.270 12:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:04.200 12:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:04.458 12:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:04.458 12:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:04.714 true 
00:27:04.714 12:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:04.714 12:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:04.970 12:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:05.227 12:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:05.227 12:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:05.485 true 00:27:05.485 12:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:05.485 12:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:05.743 12:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:06.000 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:06.000 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:06.258 true 00:27:06.258 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:06.258 12:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:07.190 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:07.448 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:07.448 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:07.705 true 00:27:07.705 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:07.705 12:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:07.962 12:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:08.221 12:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:08.221 12:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:08.478 true 00:27:08.478 12:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:08.478 12:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.736 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:08.993 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:08.993 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:09.251 true 00:27:09.508 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:09.508 12:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:10.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:10.440 12:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:10.698 12:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:10.698 12:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:10.960 true 00:27:10.960 12:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:10.960 12:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:11.217 12:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:11.476 12:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:11.476 12:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:11.734 true 00:27:11.734 12:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:11.734 12:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:11.992 12:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:12.251 12:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:12.251 12:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:12.509 true 00:27:12.509 12:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:12.509 12:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:13.884 12:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:13.884 12:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:13.884 12:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:14.142 true 00:27:14.142 12:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:14.142 12:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:14.400 12:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:14.657 12:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:14.657 12:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:14.915 true 00:27:14.915 12:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:14.915 12:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:15.172 12:48:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:15.430 12:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:15.430 12:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:15.688 true 00:27:15.688 12:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:15.688 12:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:16.621 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:16.621 12:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:16.621 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:16.878 12:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:16.878 12:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:17.135 true 00:27:17.136 12:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:17.136 12:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:17.393 12:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:17.651 12:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:17.651 12:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:17.909 true 00:27:17.909 12:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:17.909 12:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:18.167 12:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:18.425 12:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 
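The repeating pattern in this part of the log is the namespace hot-resize cycle from ns_hotplug_stress.sh: while the I/O generator is still alive (polled with kill -0), the test removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-adds the Delay0 bdev as a namespace, and grows the NULL1 bdev by one unit per pass (null_size 1010, 1011, ... in the entries above). A rough shell restatement of that cycle, built only from the RPC calls visible in the log; the perf_pid variable and the loop form are assumptions, not the literal test script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    size=1010
    # perf_pid is assumed to hold the PID the test polls with kill -0 (1142110 in this run)
    while kill -0 "$perf_pid" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        size=$((size + 1))
        $rpc bdev_null_resize NULL1 "$size"
    done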
00:27:18.425 12:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:27:18.682 true 00:27:18.939 12:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:18.939 12:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:19.872 12:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:20.129 12:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:20.129 12:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:20.387 true 00:27:20.387 12:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:20.387 12:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:20.644 12:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:20.901 12:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:20.901 12:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:21.159 true 00:27:21.159 12:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:21.159 12:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:21.416 12:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:21.674 12:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:21.674 12:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:21.932 true 00:27:21.932 12:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:21.932 12:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:22.864 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:23.122 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:23.122 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:23.380 true 00:27:23.380 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:23.380 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:23.636 12:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:23.893 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:27:23.893 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:27:24.151 true 00:27:24.151 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:24.151 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:24.409 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:24.409 Initializing NVMe Controllers 00:27:24.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:24.409 Controller IO queue size 128, less than required. 00:27:24.409 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:24.409 Controller IO queue size 128, less than required. 00:27:24.409 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:24.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:24.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:24.409 Initialization complete. Launching workers. 
00:27:24.409 ======================================================== 00:27:24.409 Latency(us) 00:27:24.409 Device Information : IOPS MiB/s Average min max 00:27:24.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 578.40 0.28 91547.85 3446.14 1018852.30 00:27:24.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8591.64 4.20 14900.13 2256.01 445448.03 00:27:24.409 ======================================================== 00:27:24.409 Total : 9170.04 4.48 19734.69 2256.01 1018852.30 00:27:24.409 00:27:24.667 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:27:24.667 12:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:27:24.925 true 00:27:24.925 12:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1142110 00:27:24.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1142110) - No such process 00:27:24.926 12:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1142110 00:27:24.926 12:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:25.183 12:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:25.440 12:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:27:25.440 12:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:27:25.440 12:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:27:25.440 12:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:25.440 12:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:27:25.699 null0 00:27:25.699 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:25.699 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:25.699 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:27:25.957 null1 00:27:25.957 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:25.957 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:25.957 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:27:26.214 null2 00:27:26.215 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:26.215 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:26.215 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:27:26.474 null3 00:27:26.474 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:26.474 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:26.474 12:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:27:26.733 null4 00:27:26.991 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:26.991 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:26.991 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:27:27.249 null5 00:27:27.249 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:27.249 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:27.249 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:27:27.507 null6 00:27:27.507 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:27.507 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:27.507 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:27:27.766 null7 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
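Once the resize phase ends above (the 1142110 process is gone and the Delay0 namespaces are removed), the test moves to the multi-worker hotplug phase and creates eight null bdevs, null0 through null7, each with bdev_null_create <name> 100 4096. A minimal sketch of that setup step, with the size (100) and block-size (4096) arguments copied from the log and nothing else assumed:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in $(seq 0 7); do
        $rpc bdev_null_create "null$i" 100 4096   # same size and block-size arguments as the log entries
    done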
00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
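The add_remove helper that each background worker runs (ns_hotplug_stress.sh lines @14-@18 in the entries above) attaches its bdev under a fixed namespace ID and detaches it again, ten times. A hedged restatement of that loop, reusing the rpc variable from the earlier sketches; only the NQN and the two RPC names are taken from the log:

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }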
00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:27.766 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:27.767 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:27.767 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:27:27.767 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:27.767 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:27.767 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:27:27.767 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1146087 1146089 1146092 1146095 1146098 1146101 1146104 1146107 00:27:27.767 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:27.767 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:27.767 12:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:28.025 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:28.025 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:28.025 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:28.025 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:28.025 12:49:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:28.025 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:28.025 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:28.025 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.283 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:28.541 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:28.541 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:28.541 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:28.541 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:28.541 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:28.541 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:28.541 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:28.541 12:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
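The interleaved add/remove entries above come from eight such workers running concurrently: the test pairs namespace IDs 1-8 with null0-null7, launches each add_remove call in the background, records the PIDs (pids+=($!)), and blocks on them (the wait 1146087 1146089 ... entry earlier). A sketch of that fan-out, assuming the add_remove function from the previous note:

    pids=()
    for i in $(seq 0 7); do
        add_remove "$((i + 1))" "null$i" &   # nsid 1..8 paired with null0..null7, as in the log
        pids+=($!)
    done
    wait "${pids[@]}"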
00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:28.799 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:29.058 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:29.058 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:29.058 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:29.058 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:29.058 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:29.058 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:29.058 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:29.316 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.575 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:29.832 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:29.832 12:49:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:29.832 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:29.832 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:29.832 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:29.832 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:29.832 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:29.832 12:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.090 12:49:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.090 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:30.348 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:30.348 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:30.348 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:30.348 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:30.348 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:30.348 
12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:30.348 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:30.348 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.606 12:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:30.864 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:30.864 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:30.864 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:30.864 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:30.864 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:30.864 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:30.864 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:30.864 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.431 
12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.431 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:31.690 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:31.690 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:31.690 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:31.690 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:31.690 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:31.690 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:31.690 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:31.690 12:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:31.948 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.948 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.948 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:31.948 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.948 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.948 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:31.948 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.948 12:49:12 
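Note: the repeated ns_hotplug_stress.sh@16/@17/@18 entries above are the namespace hotplug loop of this test. A minimal sketch of what the trace appears to be doing, assuming the per-namespace RPC calls are launched as background jobs (the interleaved ordering in the log suggests this); the rpc.py sub-commands and arguments are copied from the trace, the loop wrapper itself is a reconstruction, not the script's exact code:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subsys=nqn.2016-06.io.spdk:cnode1
  for (( i = 0; i < 10; ++i )); do
      # attach null0..null7 as namespaces 1..8 of cnode1, then detach them again
      for n in {1..8}; do
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))" &
      done
      wait
      for n in {1..8}; do
          "$rpc" nvmf_subsystem_remove_ns "$subsys" "$n" &
      done
      wait
  done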
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.948 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:31.948 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.948 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.948 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:31.949 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.949 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.949 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:31.949 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.949 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.949 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:31.949 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.949 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.949 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:31.949 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.949 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.949 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:32.206 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:32.206 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:32.206 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:32.206 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:32.207 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:32.207 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:32.207 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:32.207 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.465 12:49:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.465 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:32.724 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:32.724 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:32.724 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:32.724 12:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:32.724 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:32.724 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:32.724 12:49:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:32.724 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.983 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:33.242 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.242 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.242 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:33.500 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:33.500 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:33.500 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:33.500 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:33.500 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:33.500 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:33.500 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:33.500 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:33.758 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.758 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.758 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.758 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.758 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.758 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.758 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:33.759 rmmod nvme_tcp 00:27:33.759 rmmod nvme_fabrics 00:27:33.759 rmmod nvme_keyring 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1141699 ']' 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1141699 00:27:33.759 12:49:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1141699 ']' 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1141699 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.759 12:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1141699 00:27:33.759 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:33.759 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:33.759 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1141699' 00:27:33.759 killing process with pid 1141699 00:27:33.759 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1141699 00:27:33.759 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1141699 00:27:34.018 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:34.018 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:34.018 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:34.018 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:27:34.018 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:27:34.018 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:34.018 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:27:34.018 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:34.018 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:34.018 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.018 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.018 12:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.569 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:36.569 00:27:36.569 real 0m47.288s 00:27:36.569 user 3m18.502s 00:27:36.569 sys 0m21.347s 00:27:36.569 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:36.569 12:49:16 
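Note: the tail of the test above is the standard teardown (nvmftestfini): the kernel initiator modules are unloaded, the nvmf_tgt reactor process (pid 1141699 in this run) is killed, the SPDK-tagged iptables rules are dropped, and the target-side network state is flushed. A hedged, condensed equivalent of the commands visible in the trace; the netns deletion line stands in for remove_spdk_ns and is an assumption:

  sync
  modprobe -v -r nvme-tcp                                 # also unloads nvme_fabrics / nvme_keyring, as logged above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                      # nvmf_tgt pid, 1141699 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the test's ACCEPT rules
  ip netns delete cvl_0_0_ns_spdk                         # assumed equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1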
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:36.569 ************************************ 00:27:36.569 END TEST nvmf_ns_hotplug_stress 00:27:36.569 ************************************ 00:27:36.569 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:36.569 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:36.569 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:36.569 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:36.569 ************************************ 00:27:36.569 START TEST nvmf_delete_subsystem 00:27:36.569 ************************************ 00:27:36.569 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:36.570 * Looking for test storage... 00:27:36.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:36.570 12:49:16 
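Note: at this point the harness moves on to the next test via run_test, as logged above. To reproduce only that test outside the CI wrapper, the same script can be invoked directly with the flags shown; root privileges are typically required since it loads modules and manipulates network namespaces, and the path assumes the same workspace layout as this run:

  sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode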
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:36.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.570 --rc genhtml_branch_coverage=1 00:27:36.570 --rc genhtml_function_coverage=1 00:27:36.570 --rc genhtml_legend=1 00:27:36.570 --rc geninfo_all_blocks=1 00:27:36.570 --rc geninfo_unexecuted_blocks=1 00:27:36.570 00:27:36.570 ' 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:36.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.570 --rc genhtml_branch_coverage=1 00:27:36.570 --rc genhtml_function_coverage=1 00:27:36.570 --rc genhtml_legend=1 00:27:36.570 --rc geninfo_all_blocks=1 00:27:36.570 --rc geninfo_unexecuted_blocks=1 00:27:36.570 00:27:36.570 ' 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:36.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.570 --rc genhtml_branch_coverage=1 00:27:36.570 --rc genhtml_function_coverage=1 00:27:36.570 --rc genhtml_legend=1 00:27:36.570 --rc geninfo_all_blocks=1 00:27:36.570 --rc 
geninfo_unexecuted_blocks=1 00:27:36.570 00:27:36.570 ' 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:36.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.570 --rc genhtml_branch_coverage=1 00:27:36.570 --rc genhtml_function_coverage=1 00:27:36.570 --rc genhtml_legend=1 00:27:36.570 --rc geninfo_all_blocks=1 00:27:36.570 --rc geninfo_unexecuted_blocks=1 00:27:36.570 00:27:36.570 ' 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.570 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:36.571 12:49:16 
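Note: the scripts/common.sh trace a few lines above (lt 1.15 2, cmp_versions, decimal ...) is a field-by-field version comparison used to decide whether the installed lcov is older than 2.x and therefore still needs the legacy --rc lcov_* options. A hedged sketch of the same idea; the function name and details are illustrative, not the script's exact code:

  version_lt() {
      # split both version strings on '.', '-' and ':' and compare field by field
      local IFS='.-:'
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}
          (( x < y )) && return 0    # first differing field decides
          (( x > y )) && return 1
      done
      return 1                       # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov older than 2.x: keep the legacy --rc lcov_* flags"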
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:36.571 12:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:38.472 12:49:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.472 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:38.473 12:49:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:38.473 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:38.473 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.473 12:49:18 
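Note: the gather_supported_nvmf_pci_devs trace around this point whitelists the NIC families the test can use and then maps each matching PCI function to its kernel net device via sysfs; in this run both E810 ports (0x8086:0x159b) were found as cvl_0_0 and cvl_0_1. A hedged one-off equivalent of that scan (the helper names and loop shape are assumptions, the PCI IDs are the ones listed in the trace):

  # Intel E810: 8086:1592, 8086:159b   Intel X722: 8086:37d2   Mellanox: 15b3:a2dc, 1021, a2d6, 101d, 101b, 1017, 1019, 1015, 1013
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}') \
             $(lspci -D -d 8086:1592 | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net device under $pci: $(basename "$dev")"
      done
  done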
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:38.473 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:38.473 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.473 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:38.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:27:38.732 00:27:38.732 --- 10.0.0.2 ping statistics --- 00:27:38.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.732 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:38.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:27:38.732 00:27:38.732 --- 10.0.0.1 ping statistics --- 00:27:38.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.732 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1148877 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1148877 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1148877 ']' 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
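The trace above is nvmf_tcp_init building the two-port NVMe/TCP topology for this run: the first E810 port (cvl_0_0) is moved into a private network namespace, cvl_0_0_ns_spdk, and given the target address 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24; an iptables ACCEPT rule opens TCP port 4420 on the initiator interface, and both directions are verified with a single ping before the target is started. A minimal manual sketch of the same setup, using the interface and namespace names from this run (they are host-specific), would be:

ip netns add cvl_0_0_ns_spdk                          # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on 4420
ping -c 1 10.0.0.2                                    # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root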
00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:38.732 12:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:38.732 [2024-11-15 12:49:18.902438] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:38.732 [2024-11-15 12:49:18.903446] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:27:38.732 [2024-11-15 12:49:18.903507] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.732 [2024-11-15 12:49:18.972225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:38.732 [2024-11-15 12:49:19.027669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.732 [2024-11-15 12:49:19.027760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.732 [2024-11-15 12:49:19.027775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.733 [2024-11-15 12:49:19.027787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.733 [2024-11-15 12:49:19.027812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:38.733 [2024-11-15 12:49:19.029138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.733 [2024-11-15 12:49:19.029143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.991 [2024-11-15 12:49:19.114143] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:38.991 [2024-11-15 12:49:19.114149] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:38.991 [2024-11-15 12:49:19.114417] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
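nvmfappstart then launches the target inside that namespace with interrupt mode enabled, which is the point of the nvmf_target_core_interrupt_mode group: with -m 0x3 two reactors come up on cores 0 and 1, and the app thread plus both nvmf poll-group threads are switched to interrupt mode, as the thread.c notices above confirm. A sketch of the launch plus a minimal stand-in for waitforlisten (paths are this workspace's, and the real helper does more bookkeeping than this loop):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# Poll the default RPC socket until the target answers, roughly what waitforlisten does.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done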
00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:38.991 [2024-11-15 12:49:19.165819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:38.991 [2024-11-15 12:49:19.186037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:38.991 NULL1 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.991 12:49:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:38.991 Delay0 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1148954 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:38.991 12:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:27:38.991 [2024-11-15 12:49:19.265832] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
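With the target listening, the test drives its configuration over JSON-RPC: a TCP transport with -o -u 8192, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, a 1000 MiB null bdev with 512-byte blocks, and a delay bdev Delay0 that adds 1,000,000 us to every read and write before being attached as a namespace. spdk_nvme_perf is then run for 5 s at queue depth 128 so plenty of I/O is queued behind the delay when nvmf_delete_subsystem is issued below; the long runs of "Read/Write completed with error (sct=0, sc=8)" that follow are those queued commands failing back to the initiator as the subsystem and its queues are torn down, which is exactly the behaviour this test wants to observe, and the same 1 s delay is why the second perf run later reports average latencies just over 1,000,000 us. A hedged rpc.py equivalent of the configuration step (socket path and rpc.py location are the defaults assumed for this workspace):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                   # 1000 MiB backing bdev, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000       # ~1 s added to every read and write
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0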
00:27:40.890 12:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:40.890 12:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.890 12:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 starting I/O failed: -6 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 starting I/O failed: -6 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Write completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 starting I/O failed: -6 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Write completed with error (sct=0, sc=8) 00:27:41.148 Write completed with error (sct=0, sc=8) 00:27:41.148 starting I/O failed: -6 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 starting I/O failed: -6 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Write completed with error (sct=0, sc=8) 00:27:41.148 Write completed with error (sct=0, sc=8) 00:27:41.148 starting I/O failed: -6 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Write completed with error (sct=0, sc=8) 00:27:41.148 starting I/O failed: -6 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Write completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 starting I/O failed: -6 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 starting I/O failed: -6 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Write completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 [2024-11-15 12:49:21.388948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0d74000c40 is same with the state(6) to be set 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.148 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, 
sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 starting I/O failed: -6 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 starting I/O failed: -6 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 
00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 starting I/O failed: -6 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 starting I/O failed: -6 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 starting I/O failed: -6 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 starting I/O failed: -6 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 starting I/O failed: -6 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 starting I/O failed: -6 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 starting I/O failed: -6 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 starting I/O failed: -6 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 starting I/O failed: -6 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 
starting I/O failed: -6 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 starting I/O failed: -6 00:27:41.149 Read completed with error (sct=0, sc=8) 00:27:41.149 Write completed with error (sct=0, sc=8) 00:27:41.149 starting I/O failed: -6 00:27:41.149 starting I/O failed: -6 00:27:41.149 starting I/O failed: -6 00:27:41.149 starting I/O failed: -6 00:27:41.149 starting I/O failed: -6 00:27:42.084 [2024-11-15 12:49:22.363151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bb9a0 is same with the state(6) to be set 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 [2024-11-15 12:49:22.391380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0d7400d350 is same with the state(6) to be set 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed 
with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 [2024-11-15 12:49:22.391700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ba2c0 is same with the state(6) to be set 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 [2024-11-15 12:49:22.392250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ba4a0 is same with the state(6) to be set 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, 
sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Read completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 Write completed with error (sct=0, sc=8) 00:27:42.084 [2024-11-15 12:49:22.392496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ba860 is same with the state(6) to be set 00:27:42.084 Initializing NVMe Controllers 00:27:42.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:42.084 Controller IO queue size 128, less than required. 00:27:42.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:42.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:42.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:42.084 Initialization complete. Launching workers. 00:27:42.084 ======================================================== 00:27:42.084 Latency(us) 00:27:42.084 Device Information : IOPS MiB/s Average min max 00:27:42.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 190.87 0.09 956147.62 776.41 1013740.44 00:27:42.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.22 0.07 897939.70 404.00 1014137.03 00:27:42.084 ======================================================== 00:27:42.084 Total : 341.09 0.17 930512.45 404.00 1014137.03 00:27:42.084 00:27:42.084 [2024-11-15 12:49:22.393385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bb9a0 (9): Bad file descriptor 00:27:42.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:27:42.085 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.085 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:27:42.085 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1148954 00:27:42.085 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1148954 00:27:42.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1148954) - No such process 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1148954 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local 
es=0 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1148954 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1148954 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:42.652 [2024-11-15 12:49:22.913942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1149422 00:27:42.652 12:49:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1149422 00:27:42.652 12:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:42.652 [2024-11-15 12:49:22.978772] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:43.217 12:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:43.217 12:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1149422 00:27:43.217 12:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:43.782 12:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:43.782 12:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1149422 00:27:43.782 12:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:44.348 12:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:44.348 12:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1149422 00:27:44.348 12:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:44.604 12:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:44.604 12:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1149422 00:27:44.604 12:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:45.169 12:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:45.169 12:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1149422 00:27:45.169 12:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:45.736 12:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:45.736 12:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1149422 00:27:45.736 12:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:45.994 Initializing NVMe Controllers 00:27:45.994 Attached to NVMe over 
Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:45.994 Controller IO queue size 128, less than required. 00:27:45.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:45.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:45.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:45.994 Initialization complete. Launching workers. 00:27:45.994 ======================================================== 00:27:45.994 Latency(us) 00:27:45.994 Device Information : IOPS MiB/s Average min max 00:27:45.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004802.86 1000206.81 1042758.45 00:27:45.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005265.36 1000525.83 1013591.12 00:27:45.994 ======================================================== 00:27:45.994 Total : 256.00 0.12 1005034.11 1000206.81 1042758.45 00:27:45.994 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1149422 00:27:46.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1149422) - No such process 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1149422 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:46.252 rmmod nvme_tcp 00:27:46.252 rmmod nvme_fabrics 00:27:46.252 rmmod nvme_keyring 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1148877 ']' 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1148877 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1148877 ']' 
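Once both perf runs are done, nvmftestfini tears everything back down: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, killprocess stops the interrupt-mode target, and the firewall and namespace changes are reverted by the iptr and remove_spdk_ns calls traced below. A rough sketch of that last step (the namespace deletion happens inside _remove_spdk_ns with its output redirected, so the exact command shown is an assumption):

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules this test tagged
ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # clear the initiator address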
00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1148877 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1148877 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1148877' 00:27:46.252 killing process with pid 1148877 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1148877 00:27:46.252 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1148877 00:27:46.511 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:46.511 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:46.511 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:46.511 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:27:46.511 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:27:46.511 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:46.511 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:27:46.511 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:46.512 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:46.512 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.512 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.512 12:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.053 12:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:49.053 00:27:49.053 real 0m12.469s 00:27:49.053 user 0m24.705s 00:27:49.053 sys 0m3.722s 00:27:49.053 12:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:49.053 12:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:49.053 ************************************ 00:27:49.053 END TEST 
nvmf_delete_subsystem 00:27:49.053 ************************************ 00:27:49.053 12:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:49.053 12:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:49.053 12:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:49.053 12:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:49.053 ************************************ 00:27:49.053 START TEST nvmf_host_management 00:27:49.054 ************************************ 00:27:49.054 12:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:49.054 * Looking for test storage... 00:27:49.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:49.054 12:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:49.054 12:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:27:49.054 12:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:49.054 12:49:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:49.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.054 --rc genhtml_branch_coverage=1 00:27:49.054 --rc genhtml_function_coverage=1 00:27:49.054 --rc genhtml_legend=1 00:27:49.054 --rc geninfo_all_blocks=1 00:27:49.054 --rc geninfo_unexecuted_blocks=1 00:27:49.054 00:27:49.054 ' 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:49.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.054 --rc genhtml_branch_coverage=1 00:27:49.054 --rc genhtml_function_coverage=1 00:27:49.054 --rc genhtml_legend=1 00:27:49.054 --rc geninfo_all_blocks=1 00:27:49.054 --rc geninfo_unexecuted_blocks=1 00:27:49.054 00:27:49.054 ' 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:49.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.054 --rc genhtml_branch_coverage=1 00:27:49.054 --rc genhtml_function_coverage=1 00:27:49.054 --rc genhtml_legend=1 00:27:49.054 --rc geninfo_all_blocks=1 00:27:49.054 --rc geninfo_unexecuted_blocks=1 00:27:49.054 00:27:49.054 ' 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:49.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.054 --rc 
genhtml_branch_coverage=1 00:27:49.054 --rc genhtml_function_coverage=1 00:27:49.054 --rc genhtml_legend=1 00:27:49.054 --rc geninfo_all_blocks=1 00:27:49.054 --rc geninfo_unexecuted_blocks=1 00:27:49.054 00:27:49.054 ' 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.054 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:27:49.055 12:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:50.961 12:49:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:50.961 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:50.961 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.961 12:49:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:50.961 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:50.961 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:50.961 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:50.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:27:50.962 00:27:50.962 --- 10.0.0.2 ping statistics --- 00:27:50.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.962 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:50.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:27:50.962 00:27:50.962 --- 10.0.0.1 ping statistics --- 00:27:50.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.962 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1151772 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1151772 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1151772 ']' 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:50.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:50.962 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:50.962 [2024-11-15 12:49:31.196198] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:50.962 [2024-11-15 12:49:31.197269] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:27:50.962 [2024-11-15 12:49:31.197320] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.962 [2024-11-15 12:49:31.267636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.221 [2024-11-15 12:49:31.327170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.221 [2024-11-15 12:49:31.327216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.221 [2024-11-15 12:49:31.327239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.221 [2024-11-15 12:49:31.327249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.221 [2024-11-15 12:49:31.327258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.221 [2024-11-15 12:49:31.328853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.221 [2024-11-15 12:49:31.328905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.221 [2024-11-15 12:49:31.328927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:51.221 [2024-11-15 12:49:31.328932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.221 [2024-11-15 12:49:31.414253] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:51.221 [2024-11-15 12:49:31.414489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:51.221 [2024-11-15 12:49:31.414804] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:51.221 [2024-11-15 12:49:31.415368] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:51.221 [2024-11-15 12:49:31.415586] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
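The nvmfappstart step above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -m 0x1E and --interrupt-mode, records its pid (1151772), and waitforlisten then blocks until the RPC socket at /var/tmp/spdk.sock is usable. A minimal sketch of that wait, assuming only the pid and socket path shown in the trace (the in-tree helper additionally probes the socket with an RPC and uses a longer retry budget):

# Sketch only: poll until the SPDK app has created its RPC UNIX socket,
# giving up if the process dies first. pid/socket values are the ones
# printed above; the real waitforlisten helper is more thorough.
pid=1151772
rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
  kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt (pid $pid) exited early" >&2; exit 1; }
  if [ -S "$rpc_sock" ]; then
    exit 0   # socket present; the autotest helper would also issue a probe RPC here
  fi
  sleep 0.1
done
echo "timed out waiting for $rpc_sock" >&2
exit 1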
00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:51.221 [2024-11-15 12:49:31.461628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:51.221 Malloc0 00:27:51.221 [2024-11-15 12:49:31.533801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:51.221 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1151818 00:27:51.480 12:49:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1151818 /var/tmp/bdevperf.sock 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1151818 ']' 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:51.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:51.480 { 00:27:51.480 "params": { 00:27:51.480 "name": "Nvme$subsystem", 00:27:51.480 "trtype": "$TEST_TRANSPORT", 00:27:51.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.480 "adrfam": "ipv4", 00:27:51.480 "trsvcid": "$NVMF_PORT", 00:27:51.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.480 "hdgst": ${hdgst:-false}, 00:27:51.480 "ddgst": ${ddgst:-false} 00:27:51.480 }, 00:27:51.480 "method": "bdev_nvme_attach_controller" 00:27:51.480 } 00:27:51.480 EOF 00:27:51.480 )") 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
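The heredoc in gen_nvmf_target_json above expands each subsystem index into a bdev_nvme_attach_controller fragment against $NVMF_FIRST_TARGET_IP:$NVMF_PORT, joins the fragments with jq, and bdevperf reads the result over /dev/fd/63 via process substitution. A stand-alone sketch of that flow with the single-subsystem values from this run; the outer "subsystems"/"bdev" wrapper is an approximation of the helper's full output, and the bdevperf path is relative to the SPDK checkout:

# Sketch only: generate a one-controller JSON config and feed it to bdevperf
# the same way the trace does (--json on an anonymous /dev/fd descriptor).
gen_config() {
  cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Queue depth, IO size, workload and runtime mirror the command line above.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_config) \
  -q 64 -o 65536 -w verify -t 10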
00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:51.480 12:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:51.480 "params": { 00:27:51.480 "name": "Nvme0", 00:27:51.480 "trtype": "tcp", 00:27:51.480 "traddr": "10.0.0.2", 00:27:51.480 "adrfam": "ipv4", 00:27:51.480 "trsvcid": "4420", 00:27:51.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:51.480 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:51.480 "hdgst": false, 00:27:51.480 "ddgst": false 00:27:51.480 }, 00:27:51.480 "method": "bdev_nvme_attach_controller" 00:27:51.480 }' 00:27:51.480 [2024-11-15 12:49:31.611193] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:27:51.480 [2024-11-15 12:49:31.611289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1151818 ] 00:27:51.480 [2024-11-15 12:49:31.682559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.480 [2024-11-15 12:49:31.743401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.739 Running I/O for 10 seconds... 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:27:51.997 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.257 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:52.257 [2024-11-15 12:49:32.446003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.257 [2024-11-15 12:49:32.446069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.257 [2024-11-15 12:49:32.446106] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.257 [2024-11-15 12:49:32.446123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.257 [2024-11-15 12:49:32.446150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.257 [2024-11-15 12:49:32.446165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.257 [2024-11-15 12:49:32.446180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.257 [2024-11-15 12:49:32.446195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.257 [2024-11-15 12:49:32.446218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.257 [2024-11-15 12:49:32.446232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.257 [2024-11-15 12:49:32.446247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.257 [2024-11-15 12:49:32.446260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.257 [2024-11-15 12:49:32.446275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.257 [2024-11-15 12:49:32.446289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.257 [2024-11-15 12:49:32.446314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.257 [2024-11-15 12:49:32.446329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.257 [2024-11-15 12:49:32.446343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.257 [2024-11-15 12:49:32.446357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.257 [2024-11-15 12:49:32.446372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.257 [2024-11-15 12:49:32.446386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.257 [2024-11-15 12:49:32.446401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.257 [2024-11-15 12:49:32.446415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.257 [2024-11-15 12:49:32.446430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.257 [2024-11-15 12:49:32.446444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.257 [2024-11-15 12:49:32.446459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.257 [2024-11-15 12:49:32.446474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.257 [2024-11-15 12:49:32.446489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.257 [2024-11-15 12:49:32.446502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.257 [2024-11-15 12:49:32.446517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.446977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.446991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.258 [2024-11-15 12:49:32.447629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.258 [2024-11-15 12:49:32.447642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.259 [2024-11-15 12:49:32.447656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.259 [2024-11-15 12:49:32.447670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.259 [2024-11-15 12:49:32.447685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.259 [2024-11-15 12:49:32.447698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.259 [2024-11-15 12:49:32.447713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.259 [2024-11-15 12:49:32.447744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.259 [2024-11-15 12:49:32.447761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.259 [2024-11-15 12:49:32.447779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.259 [2024-11-15 12:49:32.447794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.259 [2024-11-15 12:49:32.447808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.259 [2024-11-15 12:49:32.447827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.259 [2024-11-15 12:49:32.447841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.259 [2024-11-15 12:49:32.447856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.259 [2024-11-15 12:49:32.447870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.259 [2024-11-15 12:49:32.447885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.259 [2024-11-15 12:49:32.447898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.259 [2024-11-15 12:49:32.447913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.259 [2024-11-15 12:49:32.447927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.259 [2024-11-15 12:49:32.447942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.259 [2024-11-15 12:49:32.447956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.259 [2024-11-15 12:49:32.447971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.259 [2024-11-15 12:49:32.447984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.259 [2024-11-15 12:49:32.447999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.259 [2024-11-15 12:49:32.448012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.259 [2024-11-15 12:49:32.449278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:52.259 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.259 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:52.259 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.259 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:52.259 task offset: 82688 on job bdev=Nvme0n1 fails 00:27:52.259 00:27:52.259 Latency(us) 00:27:52.259 [2024-11-15T11:49:32.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.259 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:52.259 Job: Nvme0n1 ended in about 0.40 seconds with error 00:27:52.259 Verification LBA range: start 0x0 length 0x400 00:27:52.259 Nvme0n1 : 0.40 1580.99 98.81 158.10 0.00 35750.63 2694.26 34758.35 00:27:52.259 [2024-11-15T11:49:32.603Z] =================================================================================================================== 00:27:52.259 [2024-11-15T11:49:32.603Z] Total : 1580.99 98.81 158.10 0.00 35750.63 2694.26 34758.35 00:27:52.259 [2024-11-15 12:49:32.451205] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:52.259 [2024-11-15 12:49:32.451234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b6a40 (9): Bad file descriptor 00:27:52.259 [2024-11-15 12:49:32.452343] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:27:52.259 [2024-11-15 12:49:32.452440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:52.259 [2024-11-15 12:49:32.452469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.259 [2024-11-15 12:49:32.452491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:27:52.259 [2024-11-15 12:49:32.452506] 
nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:27:52.259 [2024-11-15 12:49:32.452520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.259 [2024-11-15 12:49:32.452533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11b6a40 00:27:52.259 [2024-11-15 12:49:32.452568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b6a40 (9): Bad file descriptor 00:27:52.259 [2024-11-15 12:49:32.452593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:52.259 [2024-11-15 12:49:32.452608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:52.259 [2024-11-15 12:49:32.452625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:52.259 [2024-11-15 12:49:32.452642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:52.259 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.259 12:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:27:53.193 12:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1151818 00:27:53.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1151818) - No such process 00:27:53.193 12:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:27:53.193 12:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:27:53.193 12:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:53.193 12:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:27:53.193 12:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:53.193 12:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:53.193 12:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.193 12:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.193 { 00:27:53.193 "params": { 00:27:53.193 "name": "Nvme$subsystem", 00:27:53.193 "trtype": "$TEST_TRANSPORT", 00:27:53.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.193 "adrfam": "ipv4", 00:27:53.193 "trsvcid": "$NVMF_PORT", 00:27:53.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.193 "hdgst": ${hdgst:-false}, 00:27:53.193 "ddgst": ${ddgst:-false} 00:27:53.193 }, 00:27:53.193 "method": "bdev_nvme_attach_controller" 00:27:53.193 } 00:27:53.193 EOF 00:27:53.193 
)") 00:27:53.193 12:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:53.193 12:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:27:53.193 12:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:53.193 12:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:53.193 "params": { 00:27:53.193 "name": "Nvme0", 00:27:53.193 "trtype": "tcp", 00:27:53.193 "traddr": "10.0.0.2", 00:27:53.193 "adrfam": "ipv4", 00:27:53.193 "trsvcid": "4420", 00:27:53.193 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:53.193 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:53.193 "hdgst": false, 00:27:53.193 "ddgst": false 00:27:53.193 }, 00:27:53.193 "method": "bdev_nvme_attach_controller" 00:27:53.193 }' 00:27:53.193 [2024-11-15 12:49:33.508407] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:27:53.193 [2024-11-15 12:49:33.508489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152091 ] 00:27:53.451 [2024-11-15 12:49:33.577852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.451 [2024-11-15 12:49:33.635973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.708 Running I/O for 1 seconds... 00:27:54.898 1600.00 IOPS, 100.00 MiB/s 00:27:54.898 Latency(us) 00:27:54.898 [2024-11-15T11:49:35.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.898 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.898 Verification LBA range: start 0x0 length 0x400 00:27:54.898 Nvme0n1 : 1.01 1643.71 102.73 0.00 0.00 38304.61 4587.52 34564.17 00:27:54.898 [2024-11-15T11:49:35.242Z] =================================================================================================================== 00:27:54.898 [2024-11-15T11:49:35.242Z] Total : 1643.71 102.73 0.00 0.00 38304.61 4587.52 34564.17 00:27:54.898 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:27:54.898 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:27:54.898 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:54.898 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:54.898 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:27:54.898 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:54.898 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:27:54.898 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:54.898 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # 
set +e 00:27:54.898 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:54.898 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:54.898 rmmod nvme_tcp 00:27:54.898 rmmod nvme_fabrics 00:27:54.898 rmmod nvme_keyring 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1151772 ']' 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1151772 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1151772 ']' 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1151772 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1151772 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1151772' 00:27:55.157 killing process with pid 1151772 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1151772 00:27:55.157 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1151772 00:27:55.157 [2024-11-15 12:49:35.482552] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:27:55.417 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:55.417 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:55.417 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:55.417 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:27:55.417 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:27:55.417 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:55.417 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:27:55.417 12:49:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:55.417 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:55.417 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.417 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.417 12:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.322 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:57.322 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:57.322 00:27:57.322 real 0m8.663s 00:27:57.323 user 0m17.700s 00:27:57.323 sys 0m3.729s 00:27:57.323 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:57.323 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:57.323 ************************************ 00:27:57.323 END TEST nvmf_host_management 00:27:57.323 ************************************ 00:27:57.323 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:57.323 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:57.323 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:57.323 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:57.323 ************************************ 00:27:57.323 START TEST nvmf_lvol 00:27:57.323 ************************************ 00:27:57.323 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:57.323 * Looking for test storage... 
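Before the nvmf_lvol run gets under way, the nvmf_host_management teardown traced above (nvmftestfini) amounts to roughly the following shell steps. This is an illustrative sketch rather than part of the captured output; the namespace and interface names are taken from this run, and the exact behaviour of the killprocess and remove_spdk_ns helpers is an assumption:
  modprobe -v -r nvme-tcp                                  # unload the kernel NVMe/TCP initiator modules
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                          # killprocess: stop the nvmf target app (pid 1151772 in this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # iptr: drop only the SPDK_NVMF-tagged firewall rules
  ip netns delete cvl_0_0_ns_spdk                          # assumed effect of remove_spdk_ns: delete the target namespace
  ip -4 addr flush cvl_0_1                                 # clear the initiator-side interface address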
00:27:57.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:57.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.582 --rc genhtml_branch_coverage=1 00:27:57.582 --rc genhtml_function_coverage=1 00:27:57.582 --rc genhtml_legend=1 00:27:57.582 --rc geninfo_all_blocks=1 00:27:57.582 --rc geninfo_unexecuted_blocks=1 00:27:57.582 00:27:57.582 ' 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:57.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.582 --rc genhtml_branch_coverage=1 00:27:57.582 --rc genhtml_function_coverage=1 00:27:57.582 --rc genhtml_legend=1 00:27:57.582 --rc geninfo_all_blocks=1 00:27:57.582 --rc geninfo_unexecuted_blocks=1 00:27:57.582 00:27:57.582 ' 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:57.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.582 --rc genhtml_branch_coverage=1 00:27:57.582 --rc genhtml_function_coverage=1 00:27:57.582 --rc genhtml_legend=1 00:27:57.582 --rc geninfo_all_blocks=1 00:27:57.582 --rc geninfo_unexecuted_blocks=1 00:27:57.582 00:27:57.582 ' 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:57.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.582 --rc genhtml_branch_coverage=1 00:27:57.582 --rc genhtml_function_coverage=1 00:27:57.582 --rc genhtml_legend=1 00:27:57.582 --rc geninfo_all_blocks=1 00:27:57.582 --rc geninfo_unexecuted_blocks=1 00:27:57.582 00:27:57.582 ' 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.582 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.583 12:49:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:27:57.583 12:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:59.517 12:49:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:59.517 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:59.517 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:59.517 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:59.517 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:59.518 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:59.518 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.788 
12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:59.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:27:59.788 00:27:59.788 --- 10.0.0.2 ping statistics --- 00:27:59.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.788 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:27:59.788 00:27:59.788 --- 10.0.0.1 ping statistics --- 00:27:59.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.788 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:59.788 12:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:59.789 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:27:59.789 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:59.789 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:59.789 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:59.789 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1154291 00:27:59.789 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:27:59.789 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1154291 00:27:59.789 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1154291 ']' 00:27:59.789 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.789 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:59.789 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.789 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:59.789 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:59.789 [2024-11-15 12:49:40.066117] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
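The nvmfappstart trace above reduces to launching the target inside the test namespace and polling its RPC socket until it answers. A minimal sketch with the parameters used in this run (the nvmfpid variable is illustrative; waitforlisten is the harness helper seen in the trace):
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"    # retries until /var/tmp/spdk.sock accepts RPC connections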
00:27:59.789 [2024-11-15 12:49:40.067220] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:27:59.789 [2024-11-15 12:49:40.067294] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.069 [2024-11-15 12:49:40.147202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:00.069 [2024-11-15 12:49:40.209172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.069 [2024-11-15 12:49:40.209231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.069 [2024-11-15 12:49:40.209261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.069 [2024-11-15 12:49:40.209273] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.069 [2024-11-15 12:49:40.209282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:00.069 [2024-11-15 12:49:40.210811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.069 [2024-11-15 12:49:40.210854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.069 [2024-11-15 12:49:40.210859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.069 [2024-11-15 12:49:40.309743] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:00.069 [2024-11-15 12:49:40.310025] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:00.069 [2024-11-15 12:49:40.310034] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:00.069 [2024-11-15 12:49:40.310306] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:28:00.069 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:00.069 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:00.069 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:00.069 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:00.069 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:00.069 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.069 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:00.331 [2024-11-15 12:49:40.611560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.331 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:00.897 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:00.897 12:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:01.156 12:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:01.156 12:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:01.414 12:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:01.672 12:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=642a670c-56c2-4da7-b602-c4ad755cf16a 00:28:01.672 12:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 642a670c-56c2-4da7-b602-c4ad755cf16a lvol 20 00:28:01.930 12:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c86e86db-fec4-4cf0-9abf-26229aeea7e9 00:28:01.930 12:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:02.188 12:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c86e86db-fec4-4cf0-9abf-26229aeea7e9 00:28:02.447 12:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:02.706 [2024-11-15 12:49:42.907788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:28:02.706 12:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:02.964 12:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1154721 00:28:02.964 12:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:02.964 12:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:03.898 12:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c86e86db-fec4-4cf0-9abf-26229aeea7e9 MY_SNAPSHOT 00:28:04.464 12:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8178b2b1-eda1-4710-a60d-d48adf21a74a 00:28:04.464 12:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c86e86db-fec4-4cf0-9abf-26229aeea7e9 30 00:28:04.722 12:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8178b2b1-eda1-4710-a60d-d48adf21a74a MY_CLONE 00:28:04.980 12:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f2354df5-b742-4612-8224-2a1cea7704cb 00:28:04.980 12:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f2354df5-b742-4612-8224-2a1cea7704cb 00:28:05.545 12:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1154721 00:28:13.658 Initializing NVMe Controllers 00:28:13.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:13.658 Controller IO queue size 128, less than required. 00:28:13.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:13.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:13.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:13.658 Initialization complete. Launching workers. 
00:28:13.658 ======================================================== 00:28:13.658 Latency(us) 00:28:13.658 Device Information : IOPS MiB/s Average min max 00:28:13.658 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10606.70 41.43 12075.12 1811.03 59490.04 00:28:13.658 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10418.10 40.70 12289.57 5686.26 89481.75 00:28:13.658 ======================================================== 00:28:13.658 Total : 21024.80 82.13 12181.38 1811.03 89481.75 00:28:13.658 00:28:13.658 12:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:13.658 12:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c86e86db-fec4-4cf0-9abf-26229aeea7e9 00:28:13.917 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 642a670c-56c2-4da7-b602-c4ad755cf16a 00:28:14.175 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:14.175 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:14.175 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:28:14.175 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:14.175 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:28:14.175 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:14.175 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:28:14.175 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:14.175 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:14.434 rmmod nvme_tcp 00:28:14.434 rmmod nvme_fabrics 00:28:14.434 rmmod nvme_keyring 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1154291 ']' 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1154291 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1154291 ']' 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1154291 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1154291 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1154291' 00:28:14.434 killing process with pid 1154291 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1154291 00:28:14.434 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1154291 00:28:14.692 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:14.692 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:14.692 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:14.692 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:14.692 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:28:14.692 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:14.692 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:28:14.692 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:14.692 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:14.692 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.692 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.692 12:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.599 12:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:16.599 00:28:16.599 real 0m19.296s 00:28:16.599 user 0m56.345s 00:28:16.599 sys 0m7.890s 00:28:16.599 12:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.599 12:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:16.599 ************************************ 00:28:16.599 END TEST nvmf_lvol 00:28:16.599 ************************************ 00:28:16.599 12:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:16.599 12:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:16.599 12:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.599 12:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:16.858 ************************************ 00:28:16.858 START TEST nvmf_lvs_grow 00:28:16.858 
************************************ 00:28:16.858 12:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:16.858 * Looking for test storage... 00:28:16.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:16.858 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:16.858 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:28:16.858 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:16.858 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:16.858 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:16.858 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:16.858 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:16.858 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:28:16.858 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:28:16.858 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:28:16.858 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:16.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.859 --rc genhtml_branch_coverage=1 00:28:16.859 --rc genhtml_function_coverage=1 00:28:16.859 --rc genhtml_legend=1 00:28:16.859 --rc geninfo_all_blocks=1 00:28:16.859 --rc geninfo_unexecuted_blocks=1 00:28:16.859 00:28:16.859 ' 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:16.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.859 --rc genhtml_branch_coverage=1 00:28:16.859 --rc genhtml_function_coverage=1 00:28:16.859 --rc genhtml_legend=1 00:28:16.859 --rc geninfo_all_blocks=1 00:28:16.859 --rc geninfo_unexecuted_blocks=1 00:28:16.859 00:28:16.859 ' 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:16.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.859 --rc genhtml_branch_coverage=1 00:28:16.859 --rc genhtml_function_coverage=1 00:28:16.859 --rc genhtml_legend=1 00:28:16.859 --rc geninfo_all_blocks=1 00:28:16.859 --rc geninfo_unexecuted_blocks=1 00:28:16.859 00:28:16.859 ' 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:16.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.859 --rc genhtml_branch_coverage=1 00:28:16.859 --rc genhtml_function_coverage=1 00:28:16.859 --rc genhtml_legend=1 00:28:16.859 --rc geninfo_all_blocks=1 00:28:16.859 --rc geninfo_unexecuted_blocks=1 00:28:16.859 00:28:16.859 ' 00:28:16.859 12:49:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:16.859 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:16.860 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:16.860 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:16.860 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.860 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:16.860 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:16.860 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:16.860 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.860 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.860 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.860 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:16.860 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:16.860 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:28:16.860 12:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:19.397 12:49:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:19.397 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.397 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:19.398 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:19.398 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:19.398 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:19.398 12:49:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:19.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:28:19.398 00:28:19.398 --- 10.0.0.2 ping statistics --- 00:28:19.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.398 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:19.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:28:19.398 00:28:19.398 --- 10.0.0.1 ping statistics --- 00:28:19.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.398 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1157974 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1157974 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1157974 ']' 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:19.398 [2024-11-15 12:49:59.407208] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
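Note the firewall pattern used in both tests: every ACCEPT rule is installed by the ipts helper with an 'SPDK_NVMF:' comment, so the final iptr cleanup can strip exactly those rules and nothing else. Roughly, as the ipts/iptr calls in this log show:
  # Install the rule with a tag describing it.
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Teardown: dump the ruleset, drop every tagged rule, restore the rest.
  sudo iptables-save | grep -v SPDK_NVMF | sudo iptables-restore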
00:28:19.398 [2024-11-15 12:49:59.408294] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:28:19.398 [2024-11-15 12:49:59.408346] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.398 [2024-11-15 12:49:59.479665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.398 [2024-11-15 12:49:59.538532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.398 [2024-11-15 12:49:59.538584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.398 [2024-11-15 12:49:59.538597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.398 [2024-11-15 12:49:59.538608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.398 [2024-11-15 12:49:59.538617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:19.398 [2024-11-15 12:49:59.539271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.398 [2024-11-15 12:49:59.633757] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:19.398 [2024-11-15 12:49:59.634028] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.398 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:19.657 [2024-11-15 12:49:59.935903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.657 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:19.657 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:19.657 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.657 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:19.657 ************************************ 00:28:19.657 START TEST lvs_grow_clean 00:28:19.657 ************************************ 00:28:19.657 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:28:19.657 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:19.657 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:19.657 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:19.657 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:19.657 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:19.657 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:19.657 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:19.657 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:19.657 12:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:20.224 12:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:20.224 12:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:20.483 12:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e7d5fe8f-bd63-499a-8031-07b1ad99e38e 00:28:20.483 12:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d5fe8f-bd63-499a-8031-07b1ad99e38e 00:28:20.483 12:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:20.742 12:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:20.742 12:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:20.742 12:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e7d5fe8f-bd63-499a-8031-07b1ad99e38e lvol 150 00:28:21.001 12:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7d7ea070-03cd-4139-ab05-ed4ca01b3d49 00:28:21.001 12:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:21.001 12:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:21.261 [2024-11-15 12:50:01.379751] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:21.261 [2024-11-15 12:50:01.379843] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:21.261 true 00:28:21.261 12:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d5fe8f-bd63-499a-8031-07b1ad99e38e 00:28:21.261 12:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:21.522 12:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:21.522 12:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:21.781 12:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7d7ea070-03cd-4139-ab05-ed4ca01b3d49 00:28:22.040 12:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:22.300 [2024-11-15 12:50:02.500040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.300 12:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:22.559 12:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1158415 00:28:22.559 12:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:22.559 12:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:22.559 12:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1158415 /var/tmp/bdevperf.sock 00:28:22.559 12:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1158415 ']' 00:28:22.559 12:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:22.559 12:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.559 12:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:22.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:22.559 12:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.559 12:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.559 [2024-11-15 12:50:02.829024] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:28:22.559 [2024-11-15 12:50:02.829109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158415 ] 00:28:22.559 [2024-11-15 12:50:02.893518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.820 [2024-11-15 12:50:02.953555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.820 12:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.820 12:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:28:22.820 12:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:23.391 Nvme0n1 00:28:23.391 12:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:23.652 [ 00:28:23.652 { 00:28:23.652 "name": "Nvme0n1", 00:28:23.652 "aliases": [ 00:28:23.652 "7d7ea070-03cd-4139-ab05-ed4ca01b3d49" 00:28:23.652 ], 00:28:23.652 "product_name": "NVMe disk", 00:28:23.652 "block_size": 4096, 00:28:23.652 "num_blocks": 38912, 00:28:23.652 "uuid": "7d7ea070-03cd-4139-ab05-ed4ca01b3d49", 00:28:23.652 "numa_id": 0, 00:28:23.652 "assigned_rate_limits": { 00:28:23.652 "rw_ios_per_sec": 0, 00:28:23.652 "rw_mbytes_per_sec": 0, 00:28:23.652 "r_mbytes_per_sec": 0, 00:28:23.652 "w_mbytes_per_sec": 0 00:28:23.652 }, 00:28:23.652 "claimed": false, 00:28:23.652 "zoned": false, 00:28:23.652 "supported_io_types": { 00:28:23.652 "read": true, 00:28:23.652 "write": true, 00:28:23.652 "unmap": true, 00:28:23.652 "flush": true, 00:28:23.652 "reset": true, 00:28:23.652 "nvme_admin": true, 00:28:23.652 "nvme_io": true, 00:28:23.652 "nvme_io_md": false, 00:28:23.652 "write_zeroes": true, 00:28:23.652 "zcopy": false, 00:28:23.652 "get_zone_info": false, 00:28:23.652 "zone_management": false, 00:28:23.652 "zone_append": false, 00:28:23.652 "compare": true, 00:28:23.652 "compare_and_write": true, 00:28:23.652 "abort": true, 00:28:23.652 "seek_hole": false, 00:28:23.652 "seek_data": false, 00:28:23.652 "copy": true, 
00:28:23.652 "nvme_iov_md": false 00:28:23.652 }, 00:28:23.652 "memory_domains": [ 00:28:23.652 { 00:28:23.652 "dma_device_id": "system", 00:28:23.652 "dma_device_type": 1 00:28:23.652 } 00:28:23.652 ], 00:28:23.652 "driver_specific": { 00:28:23.652 "nvme": [ 00:28:23.652 { 00:28:23.652 "trid": { 00:28:23.652 "trtype": "TCP", 00:28:23.652 "adrfam": "IPv4", 00:28:23.652 "traddr": "10.0.0.2", 00:28:23.652 "trsvcid": "4420", 00:28:23.652 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:23.652 }, 00:28:23.652 "ctrlr_data": { 00:28:23.652 "cntlid": 1, 00:28:23.652 "vendor_id": "0x8086", 00:28:23.652 "model_number": "SPDK bdev Controller", 00:28:23.652 "serial_number": "SPDK0", 00:28:23.652 "firmware_revision": "25.01", 00:28:23.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:23.652 "oacs": { 00:28:23.652 "security": 0, 00:28:23.652 "format": 0, 00:28:23.652 "firmware": 0, 00:28:23.652 "ns_manage": 0 00:28:23.652 }, 00:28:23.652 "multi_ctrlr": true, 00:28:23.652 "ana_reporting": false 00:28:23.652 }, 00:28:23.652 "vs": { 00:28:23.652 "nvme_version": "1.3" 00:28:23.652 }, 00:28:23.652 "ns_data": { 00:28:23.652 "id": 1, 00:28:23.652 "can_share": true 00:28:23.652 } 00:28:23.652 } 00:28:23.652 ], 00:28:23.652 "mp_policy": "active_passive" 00:28:23.652 } 00:28:23.652 } 00:28:23.652 ] 00:28:23.652 12:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1158547 00:28:23.652 12:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:23.652 12:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:23.652 Running I/O for 10 seconds... 
00:28:24.587 Latency(us) 00:28:24.587 [2024-11-15T11:50:04.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:24.588 Nvme0n1 : 1.00 14796.00 57.80 0.00 0.00 0.00 0.00 0.00 00:28:24.588 [2024-11-15T11:50:04.932Z] =================================================================================================================== 00:28:24.588 [2024-11-15T11:50:04.932Z] Total : 14796.00 57.80 0.00 0.00 0.00 0.00 0.00 00:28:24.588 00:28:25.526 12:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e7d5fe8f-bd63-499a-8031-07b1ad99e38e 00:28:25.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:25.785 Nvme0n1 : 2.00 14939.50 58.36 0.00 0.00 0.00 0.00 0.00 00:28:25.785 [2024-11-15T11:50:06.129Z] =================================================================================================================== 00:28:25.785 [2024-11-15T11:50:06.129Z] Total : 14939.50 58.36 0.00 0.00 0.00 0.00 0.00 00:28:25.785 00:28:25.785 true 00:28:26.044 12:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d5fe8f-bd63-499a-8031-07b1ad99e38e 00:28:26.044 12:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:26.302 12:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:26.302 12:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:26.302 12:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1158547 00:28:26.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:26.870 Nvme0n1 : 3.00 15039.67 58.75 0.00 0.00 0.00 0.00 0.00 00:28:26.870 [2024-11-15T11:50:07.214Z] =================================================================================================================== 00:28:26.870 [2024-11-15T11:50:07.214Z] Total : 15039.67 58.75 0.00 0.00 0.00 0.00 0.00 00:28:26.870 00:28:27.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:27.809 Nvme0n1 : 4.00 15121.50 59.07 0.00 0.00 0.00 0.00 0.00 00:28:27.809 [2024-11-15T11:50:08.153Z] =================================================================================================================== 00:28:27.809 [2024-11-15T11:50:08.153Z] Total : 15121.50 59.07 0.00 0.00 0.00 0.00 0.00 00:28:27.809 00:28:28.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:28.745 Nvme0n1 : 5.00 15170.60 59.26 0.00 0.00 0.00 0.00 0.00 00:28:28.745 [2024-11-15T11:50:09.089Z] =================================================================================================================== 00:28:28.745 [2024-11-15T11:50:09.089Z] Total : 15170.60 59.26 0.00 0.00 0.00 0.00 0.00 00:28:28.745 00:28:29.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:29.683 Nvme0n1 : 6.00 15224.50 59.47 0.00 0.00 0.00 0.00 0.00 00:28:29.683 [2024-11-15T11:50:10.027Z] 
=================================================================================================================== 00:28:29.683 [2024-11-15T11:50:10.027Z] Total : 15224.50 59.47 0.00 0.00 0.00 0.00 0.00 00:28:29.683 00:28:30.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:30.619 Nvme0n1 : 7.00 15244.86 59.55 0.00 0.00 0.00 0.00 0.00 00:28:30.619 [2024-11-15T11:50:10.963Z] =================================================================================================================== 00:28:30.619 [2024-11-15T11:50:10.963Z] Total : 15244.86 59.55 0.00 0.00 0.00 0.00 0.00 00:28:30.619 00:28:31.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:31.998 Nvme0n1 : 8.00 15280.25 59.69 0.00 0.00 0.00 0.00 0.00 00:28:31.998 [2024-11-15T11:50:12.342Z] =================================================================================================================== 00:28:31.998 [2024-11-15T11:50:12.342Z] Total : 15280.25 59.69 0.00 0.00 0.00 0.00 0.00 00:28:31.998 00:28:32.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:32.938 Nvme0n1 : 9.00 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:28:32.938 [2024-11-15T11:50:13.282Z] =================================================================================================================== 00:28:32.938 [2024-11-15T11:50:13.282Z] Total : 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:28:32.938 00:28:33.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:33.877 Nvme0n1 : 10.00 15335.70 59.91 0.00 0.00 0.00 0.00 0.00 00:28:33.877 [2024-11-15T11:50:14.221Z] =================================================================================================================== 00:28:33.877 [2024-11-15T11:50:14.221Z] Total : 15335.70 59.91 0.00 0.00 0.00 0.00 0.00 00:28:33.877 00:28:33.877 00:28:33.877 Latency(us) 00:28:33.877 [2024-11-15T11:50:14.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:33.877 Nvme0n1 : 10.01 15333.58 59.90 0.00 0.00 8342.00 4247.70 18252.99 00:28:33.877 [2024-11-15T11:50:14.221Z] =================================================================================================================== 00:28:33.877 [2024-11-15T11:50:14.221Z] Total : 15333.58 59.90 0.00 0.00 8342.00 4247.70 18252.99 00:28:33.877 { 00:28:33.877 "results": [ 00:28:33.877 { 00:28:33.877 "job": "Nvme0n1", 00:28:33.877 "core_mask": "0x2", 00:28:33.877 "workload": "randwrite", 00:28:33.877 "status": "finished", 00:28:33.877 "queue_depth": 128, 00:28:33.877 "io_size": 4096, 00:28:33.877 "runtime": 10.00562, 00:28:33.877 "iops": 15333.582526620039, 00:28:33.877 "mibps": 59.89680674460953, 00:28:33.877 "io_failed": 0, 00:28:33.877 "io_timeout": 0, 00:28:33.877 "avg_latency_us": 8341.996876357005, 00:28:33.877 "min_latency_us": 4247.7037037037035, 00:28:33.877 "max_latency_us": 18252.98962962963 00:28:33.877 } 00:28:33.877 ], 00:28:33.877 "core_count": 1 00:28:33.877 } 00:28:33.877 12:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1158415 00:28:33.877 12:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1158415 ']' 00:28:33.877 12:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1158415 
00:28:33.877 12:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:28:33.877 12:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:33.877 12:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1158415 00:28:33.877 12:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:33.877 12:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:33.877 12:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1158415' 00:28:33.877 killing process with pid 1158415 00:28:33.877 12:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1158415 00:28:33.877 Received shutdown signal, test time was about 10.000000 seconds 00:28:33.877 00:28:33.877 Latency(us) 00:28:33.877 [2024-11-15T11:50:14.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.877 [2024-11-15T11:50:14.221Z] =================================================================================================================== 00:28:33.877 [2024-11-15T11:50:14.221Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:33.877 12:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1158415 00:28:33.877 12:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:34.136 12:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:34.703 12:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d5fe8f-bd63-499a-8031-07b1ad99e38e 00:28:34.703 12:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:34.962 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:34.962 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:28:34.962 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:34.962 [2024-11-15 12:50:15.303814] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:35.222 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d5fe8f-bd63-499a-8031-07b1ad99e38e 
00:28:35.222 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:28:35.222 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d5fe8f-bd63-499a-8031-07b1ad99e38e 00:28:35.222 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:35.222 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.222 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:35.222 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.222 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:35.222 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.222 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:35.222 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:35.222 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d5fe8f-bd63-499a-8031-07b1ad99e38e 00:28:35.483 request: 00:28:35.483 { 00:28:35.483 "uuid": "e7d5fe8f-bd63-499a-8031-07b1ad99e38e", 00:28:35.483 "method": "bdev_lvol_get_lvstores", 00:28:35.483 "req_id": 1 00:28:35.483 } 00:28:35.483 Got JSON-RPC error response 00:28:35.483 response: 00:28:35.483 { 00:28:35.483 "code": -19, 00:28:35.483 "message": "No such device" 00:28:35.483 } 00:28:35.483 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:28:35.483 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:35.483 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:35.483 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:35.483 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:35.741 aio_bdev 00:28:35.741 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
7d7ea070-03cd-4139-ab05-ed4ca01b3d49 00:28:35.741 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=7d7ea070-03cd-4139-ab05-ed4ca01b3d49 00:28:35.741 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:35.741 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:28:35.741 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:35.741 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:35.741 12:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:36.000 12:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7d7ea070-03cd-4139-ab05-ed4ca01b3d49 -t 2000 00:28:36.258 [ 00:28:36.258 { 00:28:36.258 "name": "7d7ea070-03cd-4139-ab05-ed4ca01b3d49", 00:28:36.258 "aliases": [ 00:28:36.258 "lvs/lvol" 00:28:36.258 ], 00:28:36.258 "product_name": "Logical Volume", 00:28:36.258 "block_size": 4096, 00:28:36.258 "num_blocks": 38912, 00:28:36.258 "uuid": "7d7ea070-03cd-4139-ab05-ed4ca01b3d49", 00:28:36.258 "assigned_rate_limits": { 00:28:36.258 "rw_ios_per_sec": 0, 00:28:36.258 "rw_mbytes_per_sec": 0, 00:28:36.258 "r_mbytes_per_sec": 0, 00:28:36.258 "w_mbytes_per_sec": 0 00:28:36.258 }, 00:28:36.258 "claimed": false, 00:28:36.258 "zoned": false, 00:28:36.258 "supported_io_types": { 00:28:36.258 "read": true, 00:28:36.258 "write": true, 00:28:36.258 "unmap": true, 00:28:36.258 "flush": false, 00:28:36.258 "reset": true, 00:28:36.258 "nvme_admin": false, 00:28:36.258 "nvme_io": false, 00:28:36.258 "nvme_io_md": false, 00:28:36.258 "write_zeroes": true, 00:28:36.258 "zcopy": false, 00:28:36.258 "get_zone_info": false, 00:28:36.258 "zone_management": false, 00:28:36.258 "zone_append": false, 00:28:36.258 "compare": false, 00:28:36.258 "compare_and_write": false, 00:28:36.258 "abort": false, 00:28:36.258 "seek_hole": true, 00:28:36.258 "seek_data": true, 00:28:36.258 "copy": false, 00:28:36.258 "nvme_iov_md": false 00:28:36.258 }, 00:28:36.258 "driver_specific": { 00:28:36.258 "lvol": { 00:28:36.258 "lvol_store_uuid": "e7d5fe8f-bd63-499a-8031-07b1ad99e38e", 00:28:36.258 "base_bdev": "aio_bdev", 00:28:36.258 "thin_provision": false, 00:28:36.258 "num_allocated_clusters": 38, 00:28:36.258 "snapshot": false, 00:28:36.258 "clone": false, 00:28:36.258 "esnap_clone": false 00:28:36.258 } 00:28:36.258 } 00:28:36.258 } 00:28:36.258 ] 00:28:36.258 12:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:28:36.258 12:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d5fe8f-bd63-499a-8031-07b1ad99e38e 00:28:36.258 12:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:36.517 12:50:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:36.517 12:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7d5fe8f-bd63-499a-8031-07b1ad99e38e 00:28:36.517 12:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:36.776 12:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:36.776 12:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7d7ea070-03cd-4139-ab05-ed4ca01b3d49 00:28:37.036 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e7d5fe8f-bd63-499a-8031-07b1ad99e38e 00:28:37.297 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:37.557 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:37.557 00:28:37.557 real 0m17.870s 00:28:37.557 user 0m17.553s 00:28:37.557 sys 0m1.799s 00:28:37.557 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.557 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:37.557 ************************************ 00:28:37.557 END TEST lvs_grow_clean 00:28:37.557 ************************************ 00:28:37.557 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:28:37.557 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:37.557 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.557 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:37.817 ************************************ 00:28:37.817 START TEST lvs_grow_dirty 00:28:37.817 ************************************ 00:28:37.817 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:28:37.817 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:37.817 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:37.817 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:37.817 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:37.817 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:37.817 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:37.817 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:37.817 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:37.817 12:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:38.076 12:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:38.076 12:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:38.333 12:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3fdfc549-1791-4738-8899-d739b1cfd126 00:28:38.334 12:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdfc549-1791-4738-8899-d739b1cfd126 00:28:38.334 12:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:38.591 12:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:38.591 12:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:38.591 12:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3fdfc549-1791-4738-8899-d739b1cfd126 lvol 150 00:28:38.850 12:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c6cabbf4-0710-4357-8259-2af837c4c217 00:28:38.850 12:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:38.850 12:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:39.156 [2024-11-15 12:50:19.335756] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:39.156 [2024-11-15 12:50:19.335848] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:39.156 true 00:28:39.156 12:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdfc549-1791-4738-8899-d739b1cfd126 00:28:39.156 12:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:39.415 12:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:39.415 12:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:39.674 12:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c6cabbf4-0710-4357-8259-2af837c4c217 00:28:39.931 12:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:40.189 [2024-11-15 12:50:20.444022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.189 12:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:40.447 12:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1160568 00:28:40.447 12:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:40.447 12:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:40.447 12:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1160568 /var/tmp/bdevperf.sock 00:28:40.447 12:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1160568 ']' 00:28:40.447 12:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:40.447 12:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.447 12:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:40.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:40.447 12:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.447 12:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:40.447 [2024-11-15 12:50:20.782437] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:28:40.447 [2024-11-15 12:50:20.782545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160568 ] 00:28:40.706 [2024-11-15 12:50:20.850853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.706 [2024-11-15 12:50:20.910355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.706 12:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.706 12:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:40.706 12:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:41.276 Nvme0n1 00:28:41.276 12:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:41.535 [ 00:28:41.535 { 00:28:41.535 "name": "Nvme0n1", 00:28:41.535 "aliases": [ 00:28:41.535 "c6cabbf4-0710-4357-8259-2af837c4c217" 00:28:41.535 ], 00:28:41.535 "product_name": "NVMe disk", 00:28:41.535 "block_size": 4096, 00:28:41.535 "num_blocks": 38912, 00:28:41.535 "uuid": "c6cabbf4-0710-4357-8259-2af837c4c217", 00:28:41.535 "numa_id": 0, 00:28:41.535 "assigned_rate_limits": { 00:28:41.535 "rw_ios_per_sec": 0, 00:28:41.535 "rw_mbytes_per_sec": 0, 00:28:41.535 "r_mbytes_per_sec": 0, 00:28:41.535 "w_mbytes_per_sec": 0 00:28:41.535 }, 00:28:41.535 "claimed": false, 00:28:41.535 "zoned": false, 00:28:41.535 "supported_io_types": { 00:28:41.535 "read": true, 00:28:41.535 "write": true, 00:28:41.535 "unmap": true, 00:28:41.535 "flush": true, 00:28:41.535 "reset": true, 00:28:41.535 "nvme_admin": true, 00:28:41.535 "nvme_io": true, 00:28:41.535 "nvme_io_md": false, 00:28:41.535 "write_zeroes": true, 00:28:41.535 "zcopy": false, 00:28:41.535 "get_zone_info": false, 00:28:41.535 "zone_management": false, 00:28:41.535 "zone_append": false, 00:28:41.535 "compare": true, 00:28:41.535 "compare_and_write": true, 00:28:41.535 "abort": true, 00:28:41.535 "seek_hole": false, 00:28:41.535 "seek_data": false, 00:28:41.535 "copy": true, 00:28:41.535 "nvme_iov_md": false 00:28:41.535 }, 00:28:41.535 "memory_domains": [ 00:28:41.535 { 00:28:41.535 "dma_device_id": "system", 00:28:41.535 "dma_device_type": 1 00:28:41.535 } 00:28:41.535 ], 00:28:41.535 "driver_specific": { 00:28:41.535 "nvme": [ 00:28:41.535 { 00:28:41.535 "trid": { 00:28:41.535 "trtype": "TCP", 00:28:41.535 "adrfam": "IPv4", 00:28:41.535 "traddr": "10.0.0.2", 00:28:41.535 "trsvcid": "4420", 00:28:41.535 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:41.535 }, 00:28:41.535 "ctrlr_data": 
{ 00:28:41.535 "cntlid": 1, 00:28:41.535 "vendor_id": "0x8086", 00:28:41.535 "model_number": "SPDK bdev Controller", 00:28:41.535 "serial_number": "SPDK0", 00:28:41.535 "firmware_revision": "25.01", 00:28:41.535 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:41.535 "oacs": { 00:28:41.535 "security": 0, 00:28:41.535 "format": 0, 00:28:41.535 "firmware": 0, 00:28:41.535 "ns_manage": 0 00:28:41.535 }, 00:28:41.535 "multi_ctrlr": true, 00:28:41.535 "ana_reporting": false 00:28:41.535 }, 00:28:41.535 "vs": { 00:28:41.535 "nvme_version": "1.3" 00:28:41.535 }, 00:28:41.535 "ns_data": { 00:28:41.535 "id": 1, 00:28:41.535 "can_share": true 00:28:41.535 } 00:28:41.535 } 00:28:41.535 ], 00:28:41.535 "mp_policy": "active_passive" 00:28:41.535 } 00:28:41.535 } 00:28:41.535 ] 00:28:41.535 12:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1160633 00:28:41.535 12:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:41.535 12:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:41.535 Running I/O for 10 seconds... 00:28:42.912 Latency(us) 00:28:42.912 [2024-11-15T11:50:23.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:42.912 Nvme0n1 : 1.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:28:42.912 [2024-11-15T11:50:23.256Z] =================================================================================================================== 00:28:42.912 [2024-11-15T11:50:23.256Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:28:42.912 00:28:43.480 12:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3fdfc549-1791-4738-8899-d739b1cfd126 00:28:43.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:43.738 Nvme0n1 : 2.00 15145.00 59.16 0.00 0.00 0.00 0.00 0.00 00:28:43.738 [2024-11-15T11:50:24.082Z] =================================================================================================================== 00:28:43.738 [2024-11-15T11:50:24.082Z] Total : 15145.00 59.16 0.00 0.00 0.00 0.00 0.00 00:28:43.738 00:28:43.997 true 00:28:43.997 12:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdfc549-1791-4738-8899-d739b1cfd126 00:28:43.998 12:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:44.255 12:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:44.255 12:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:44.255 12:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1160633 00:28:44.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:44.821 Nvme0n1 : 
3.00 15197.67 59.37 0.00 0.00 0.00 0.00 0.00 00:28:44.821 [2024-11-15T11:50:25.165Z] =================================================================================================================== 00:28:44.821 [2024-11-15T11:50:25.165Z] Total : 15197.67 59.37 0.00 0.00 0.00 0.00 0.00 00:28:44.821 00:28:45.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:45.754 Nvme0n1 : 4.00 15271.75 59.66 0.00 0.00 0.00 0.00 0.00 00:28:45.754 [2024-11-15T11:50:26.098Z] =================================================================================================================== 00:28:45.754 [2024-11-15T11:50:26.098Z] Total : 15271.75 59.66 0.00 0.00 0.00 0.00 0.00 00:28:45.754 00:28:46.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:46.688 Nvme0n1 : 5.00 15329.00 59.88 0.00 0.00 0.00 0.00 0.00 00:28:46.688 [2024-11-15T11:50:27.032Z] =================================================================================================================== 00:28:46.688 [2024-11-15T11:50:27.032Z] Total : 15329.00 59.88 0.00 0.00 0.00 0.00 0.00 00:28:46.688 00:28:47.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:47.622 Nvme0n1 : 6.00 15372.67 60.05 0.00 0.00 0.00 0.00 0.00 00:28:47.622 [2024-11-15T11:50:27.966Z] =================================================================================================================== 00:28:47.622 [2024-11-15T11:50:27.966Z] Total : 15372.67 60.05 0.00 0.00 0.00 0.00 0.00 00:28:47.622 00:28:48.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:48.557 Nvme0n1 : 7.00 15408.14 60.19 0.00 0.00 0.00 0.00 0.00 00:28:48.557 [2024-11-15T11:50:28.901Z] =================================================================================================================== 00:28:48.557 [2024-11-15T11:50:28.901Z] Total : 15408.14 60.19 0.00 0.00 0.00 0.00 0.00 00:28:48.557 00:28:49.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:49.931 Nvme0n1 : 8.00 15450.62 60.35 0.00 0.00 0.00 0.00 0.00 00:28:49.931 [2024-11-15T11:50:30.275Z] =================================================================================================================== 00:28:49.931 [2024-11-15T11:50:30.275Z] Total : 15450.62 60.35 0.00 0.00 0.00 0.00 0.00 00:28:49.931 00:28:50.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:50.867 Nvme0n1 : 9.00 15448.89 60.35 0.00 0.00 0.00 0.00 0.00 00:28:50.867 [2024-11-15T11:50:31.211Z] =================================================================================================================== 00:28:50.867 [2024-11-15T11:50:31.211Z] Total : 15448.89 60.35 0.00 0.00 0.00 0.00 0.00 00:28:50.867 00:28:51.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:51.801 Nvme0n1 : 10.00 15466.10 60.41 0.00 0.00 0.00 0.00 0.00 00:28:51.801 [2024-11-15T11:50:32.145Z] =================================================================================================================== 00:28:51.801 [2024-11-15T11:50:32.145Z] Total : 15466.10 60.41 0.00 0.00 0.00 0.00 0.00 00:28:51.801 00:28:51.801 00:28:51.801 Latency(us) 00:28:51.801 [2024-11-15T11:50:32.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:51.801 Nvme0n1 : 10.01 15467.67 60.42 0.00 0.00 8270.75 3762.25 18058.81 00:28:51.801 
[2024-11-15T11:50:32.145Z] =================================================================================================================== 00:28:51.801 [2024-11-15T11:50:32.145Z] Total : 15467.67 60.42 0.00 0.00 8270.75 3762.25 18058.81 00:28:51.801 { 00:28:51.801 "results": [ 00:28:51.801 { 00:28:51.801 "job": "Nvme0n1", 00:28:51.801 "core_mask": "0x2", 00:28:51.801 "workload": "randwrite", 00:28:51.801 "status": "finished", 00:28:51.801 "queue_depth": 128, 00:28:51.801 "io_size": 4096, 00:28:51.801 "runtime": 10.007262, 00:28:51.801 "iops": 15467.667379948682, 00:28:51.801 "mibps": 60.42057570292454, 00:28:51.801 "io_failed": 0, 00:28:51.801 "io_timeout": 0, 00:28:51.801 "avg_latency_us": 8270.750872592871, 00:28:51.801 "min_latency_us": 3762.251851851852, 00:28:51.801 "max_latency_us": 18058.80888888889 00:28:51.801 } 00:28:51.801 ], 00:28:51.801 "core_count": 1 00:28:51.801 } 00:28:51.801 12:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1160568 00:28:51.801 12:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1160568 ']' 00:28:51.801 12:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1160568 00:28:51.801 12:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:28:51.801 12:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:51.801 12:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1160568 00:28:51.801 12:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:51.801 12:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:51.801 12:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1160568' 00:28:51.801 killing process with pid 1160568 00:28:51.801 12:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1160568 00:28:51.801 Received shutdown signal, test time was about 10.000000 seconds 00:28:51.801 00:28:51.801 Latency(us) 00:28:51.801 [2024-11-15T11:50:32.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.801 [2024-11-15T11:50:32.145Z] =================================================================================================================== 00:28:51.801 [2024-11-15T11:50:32.145Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:51.801 12:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1160568 00:28:52.059 12:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:52.317 12:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:28:52.575 12:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdfc549-1791-4738-8899-d739b1cfd126 00:28:52.575 12:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:52.833 12:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:52.833 12:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:28:52.833 12:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1157974 00:28:52.833 12:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1157974 00:28:52.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1157974 Killed "${NVMF_APP[@]}" "$@" 00:28:52.833 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:28:52.833 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:28:52.833 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:52.833 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:52.833 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:52.833 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1161910 00:28:52.833 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:52.833 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1161910 00:28:52.833 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1161910 ']' 00:28:52.833 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.833 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.833 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:52.833 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.833 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:52.833 [2024-11-15 12:50:33.090900] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:52.833 [2024-11-15 12:50:33.092022] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:28:52.833 [2024-11-15 12:50:33.092074] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.833 [2024-11-15 12:50:33.166078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.091 [2024-11-15 12:50:33.223970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.091 [2024-11-15 12:50:33.224049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.091 [2024-11-15 12:50:33.224063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.091 [2024-11-15 12:50:33.224074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.091 [2024-11-15 12:50:33.224083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:53.091 [2024-11-15 12:50:33.224625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.091 [2024-11-15 12:50:33.311295] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:53.091 [2024-11-15 12:50:33.311570] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:53.091 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.091 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:53.091 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:53.091 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.091 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:53.091 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.091 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:53.350 [2024-11-15 12:50:33.615382] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:53.350 [2024-11-15 12:50:33.615498] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:53.350 [2024-11-15 12:50:33.615542] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:53.350 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:28:53.350 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c6cabbf4-0710-4357-8259-2af837c4c217 00:28:53.350 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c6cabbf4-0710-4357-8259-2af837c4c217 00:28:53.350 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:53.350 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:53.350 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:53.350 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:53.350 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:53.608 12:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c6cabbf4-0710-4357-8259-2af837c4c217 -t 2000 00:28:53.868 [ 00:28:53.868 { 00:28:53.868 "name": "c6cabbf4-0710-4357-8259-2af837c4c217", 00:28:53.868 "aliases": [ 00:28:53.868 "lvs/lvol" 00:28:53.868 ], 00:28:53.868 "product_name": "Logical Volume", 00:28:53.868 "block_size": 4096, 00:28:53.868 "num_blocks": 38912, 00:28:53.868 "uuid": "c6cabbf4-0710-4357-8259-2af837c4c217", 00:28:53.868 "assigned_rate_limits": { 00:28:53.868 "rw_ios_per_sec": 0, 00:28:53.868 "rw_mbytes_per_sec": 0, 00:28:53.868 
"r_mbytes_per_sec": 0, 00:28:53.868 "w_mbytes_per_sec": 0 00:28:53.868 }, 00:28:53.868 "claimed": false, 00:28:53.868 "zoned": false, 00:28:53.868 "supported_io_types": { 00:28:53.868 "read": true, 00:28:53.868 "write": true, 00:28:53.868 "unmap": true, 00:28:53.868 "flush": false, 00:28:53.868 "reset": true, 00:28:53.868 "nvme_admin": false, 00:28:53.868 "nvme_io": false, 00:28:53.868 "nvme_io_md": false, 00:28:53.868 "write_zeroes": true, 00:28:53.868 "zcopy": false, 00:28:53.868 "get_zone_info": false, 00:28:53.868 "zone_management": false, 00:28:53.868 "zone_append": false, 00:28:53.868 "compare": false, 00:28:53.868 "compare_and_write": false, 00:28:53.868 "abort": false, 00:28:53.868 "seek_hole": true, 00:28:53.868 "seek_data": true, 00:28:53.868 "copy": false, 00:28:53.868 "nvme_iov_md": false 00:28:53.868 }, 00:28:53.868 "driver_specific": { 00:28:53.868 "lvol": { 00:28:53.868 "lvol_store_uuid": "3fdfc549-1791-4738-8899-d739b1cfd126", 00:28:53.868 "base_bdev": "aio_bdev", 00:28:53.868 "thin_provision": false, 00:28:53.868 "num_allocated_clusters": 38, 00:28:53.868 "snapshot": false, 00:28:53.868 "clone": false, 00:28:53.868 "esnap_clone": false 00:28:53.868 } 00:28:53.868 } 00:28:53.868 } 00:28:53.868 ] 00:28:53.868 12:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:53.868 12:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdfc549-1791-4738-8899-d739b1cfd126 00:28:53.868 12:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:28:54.161 12:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:28:54.161 12:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdfc549-1791-4738-8899-d739b1cfd126 00:28:54.161 12:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:28:54.451 12:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:28:54.451 12:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:54.734 [2024-11-15 12:50:35.037161] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:54.992 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdfc549-1791-4738-8899-d739b1cfd126 00:28:54.992 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:28:54.992 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdfc549-1791-4738-8899-d739b1cfd126 00:28:54.992 12:50:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:54.992 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.992 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:54.992 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.992 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:54.992 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.992 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:54.993 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:54.993 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdfc549-1791-4738-8899-d739b1cfd126 00:28:55.251 request: 00:28:55.251 { 00:28:55.251 "uuid": "3fdfc549-1791-4738-8899-d739b1cfd126", 00:28:55.251 "method": "bdev_lvol_get_lvstores", 00:28:55.251 "req_id": 1 00:28:55.251 } 00:28:55.251 Got JSON-RPC error response 00:28:55.251 response: 00:28:55.251 { 00:28:55.251 "code": -19, 00:28:55.251 "message": "No such device" 00:28:55.251 } 00:28:55.251 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:28:55.251 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.251 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:55.251 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:55.251 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:55.510 aio_bdev 00:28:55.510 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c6cabbf4-0710-4357-8259-2af837c4c217 00:28:55.510 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c6cabbf4-0710-4357-8259-2af837c4c217 00:28:55.510 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:55.510 12:50:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:55.510 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:55.510 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:55.510 12:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:55.769 12:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c6cabbf4-0710-4357-8259-2af837c4c217 -t 2000 00:28:56.027 [ 00:28:56.027 { 00:28:56.027 "name": "c6cabbf4-0710-4357-8259-2af837c4c217", 00:28:56.027 "aliases": [ 00:28:56.027 "lvs/lvol" 00:28:56.027 ], 00:28:56.027 "product_name": "Logical Volume", 00:28:56.027 "block_size": 4096, 00:28:56.027 "num_blocks": 38912, 00:28:56.027 "uuid": "c6cabbf4-0710-4357-8259-2af837c4c217", 00:28:56.027 "assigned_rate_limits": { 00:28:56.027 "rw_ios_per_sec": 0, 00:28:56.027 "rw_mbytes_per_sec": 0, 00:28:56.027 "r_mbytes_per_sec": 0, 00:28:56.027 "w_mbytes_per_sec": 0 00:28:56.027 }, 00:28:56.027 "claimed": false, 00:28:56.027 "zoned": false, 00:28:56.027 "supported_io_types": { 00:28:56.027 "read": true, 00:28:56.027 "write": true, 00:28:56.027 "unmap": true, 00:28:56.027 "flush": false, 00:28:56.027 "reset": true, 00:28:56.027 "nvme_admin": false, 00:28:56.027 "nvme_io": false, 00:28:56.027 "nvme_io_md": false, 00:28:56.027 "write_zeroes": true, 00:28:56.027 "zcopy": false, 00:28:56.027 "get_zone_info": false, 00:28:56.027 "zone_management": false, 00:28:56.027 "zone_append": false, 00:28:56.027 "compare": false, 00:28:56.027 "compare_and_write": false, 00:28:56.027 "abort": false, 00:28:56.027 "seek_hole": true, 00:28:56.027 "seek_data": true, 00:28:56.027 "copy": false, 00:28:56.027 "nvme_iov_md": false 00:28:56.027 }, 00:28:56.027 "driver_specific": { 00:28:56.027 "lvol": { 00:28:56.027 "lvol_store_uuid": "3fdfc549-1791-4738-8899-d739b1cfd126", 00:28:56.027 "base_bdev": "aio_bdev", 00:28:56.027 "thin_provision": false, 00:28:56.027 "num_allocated_clusters": 38, 00:28:56.027 "snapshot": false, 00:28:56.027 "clone": false, 00:28:56.028 "esnap_clone": false 00:28:56.028 } 00:28:56.028 } 00:28:56.028 } 00:28:56.028 ] 00:28:56.286 12:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:56.286 12:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdfc549-1791-4738-8899-d739b1cfd126 00:28:56.286 12:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:56.545 12:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:56.545 12:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdfc549-1791-4738-8899-d739b1cfd126 00:28:56.545 12:50:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:56.803 12:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:56.803 12:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c6cabbf4-0710-4357-8259-2af837c4c217 00:28:57.061 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3fdfc549-1791-4738-8899-d739b1cfd126 00:28:57.320 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:57.580 00:28:57.580 real 0m19.874s 00:28:57.580 user 0m36.873s 00:28:57.580 sys 0m4.730s 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:57.580 ************************************ 00:28:57.580 END TEST lvs_grow_dirty 00:28:57.580 ************************************ 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:57.580 nvmf_trace.0 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
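The trace above completes the lvs_grow_dirty check: the aio backing bdev is deleted out from under a live lvstore, bdev_lvol_get_lvstores is expected to fail with -19 ("No such device"), the aio bdev is then recreated from the same file, and the lvol store is expected to come back with its cluster counts intact (free_clusters 61, total_data_clusters 99). A condensed sketch of that sequence, assuming a local SPDK checkout in ./spdk and with $LVS_UUID and $AIO_FILE as placeholders for the lvstore UUID and backing file seen in the trace:

    rpc=./spdk/scripts/rpc.py
    $rpc bdev_aio_delete aio_bdev                            # drop the backing bdev under the lvstore
    if $rpc bdev_lvol_get_lvstores -u "$LVS_UUID" >/dev/null 2>&1; then
        echo "lvstore unexpectedly still present" >&2        # must fail with -19 / No such device
        exit 1
    fi
    $rpc bdev_aio_create "$AIO_FILE" aio_bdev 4096           # recreate the backing bdev (4 KiB blocks)
    $rpc bdev_wait_for_examine                               # wait until the lvol bdev is re-registered
    free=$($rpc bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
    total=$($rpc bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 )) || exit 1                # counts must survive the aio re-attach
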
00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.580 rmmod nvme_tcp 00:28:57.580 rmmod nvme_fabrics 00:28:57.580 rmmod nvme_keyring 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1161910 ']' 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1161910 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1161910 ']' 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1161910 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.580 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1161910 00:28:57.839 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:57.839 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:57.839 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1161910' 00:28:57.839 killing process with pid 1161910 00:28:57.839 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1161910 00:28:57.839 12:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1161910 00:28:58.099 12:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:58.099 12:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:58.099 12:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:58.099 12:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:28:58.099 12:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:28:58.099 12:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:58.099 12:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:28:58.099 12:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.099 12:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:58.099 12:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.099 12:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.099 12:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.004 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:00.004 00:29:00.004 real 0m43.307s 00:29:00.004 user 0m56.258s 00:29:00.004 sys 0m8.540s 00:29:00.004 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.004 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:00.004 ************************************ 00:29:00.004 END TEST nvmf_lvs_grow 00:29:00.004 ************************************ 00:29:00.004 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:00.004 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:00.004 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.004 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:00.004 ************************************ 00:29:00.004 START TEST nvmf_bdev_io_wait 00:29:00.004 ************************************ 00:29:00.004 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:00.263 * Looking for test storage... 
00:29:00.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:00.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.263 --rc genhtml_branch_coverage=1 00:29:00.263 --rc genhtml_function_coverage=1 00:29:00.263 --rc genhtml_legend=1 00:29:00.263 --rc geninfo_all_blocks=1 00:29:00.263 --rc geninfo_unexecuted_blocks=1 00:29:00.263 00:29:00.263 ' 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:00.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.263 --rc genhtml_branch_coverage=1 00:29:00.263 --rc genhtml_function_coverage=1 00:29:00.263 --rc genhtml_legend=1 00:29:00.263 --rc geninfo_all_blocks=1 00:29:00.263 --rc geninfo_unexecuted_blocks=1 00:29:00.263 00:29:00.263 ' 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:00.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.263 --rc genhtml_branch_coverage=1 00:29:00.263 --rc genhtml_function_coverage=1 00:29:00.263 --rc genhtml_legend=1 00:29:00.263 --rc geninfo_all_blocks=1 00:29:00.263 --rc geninfo_unexecuted_blocks=1 00:29:00.263 00:29:00.263 ' 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:00.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.263 --rc genhtml_branch_coverage=1 00:29:00.263 --rc genhtml_function_coverage=1 00:29:00.263 --rc genhtml_legend=1 00:29:00.263 --rc geninfo_all_blocks=1 00:29:00.263 --rc 
geninfo_unexecuted_blocks=1 00:29:00.263 00:29:00.263 ' 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.263 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.264 12:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:02.168 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:02.168 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:02.168 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:02.426 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:02.426 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.426 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:02.426 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.426 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:02.426 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:02.426 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.426 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:02.426 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:02.426 
12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.426 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:02.426 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.426 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:02.426 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:02.427 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:02.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:02.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:29:02.427 00:29:02.427 --- 10.0.0.2 ping statistics --- 00:29:02.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.427 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:02.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:02.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:29:02.427 00:29:02.427 --- 10.0.0.1 ping statistics --- 00:29:02.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.427 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1164563 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1164563 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1164563 ']' 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
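The target-side networking above is built directly on the physical e810 ports: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, an iptables ACCEPT rule opens TCP port 4420, and a ping in each direction confirms reachability. A minimal sketch of the same steps, with interface names and addresses taken from the trace (run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator
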
00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.427 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:02.427 [2024-11-15 12:50:42.735913] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:02.427 [2024-11-15 12:50:42.737238] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:29:02.427 [2024-11-15 12:50:42.737299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.686 [2024-11-15 12:50:42.817387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:02.686 [2024-11-15 12:50:42.884211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.686 [2024-11-15 12:50:42.884267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.686 [2024-11-15 12:50:42.884284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.686 [2024-11-15 12:50:42.884296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.686 [2024-11-15 12:50:42.884306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.686 [2024-11-15 12:50:42.885870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.686 [2024-11-15 12:50:42.885934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.686 [2024-11-15 12:50:42.886002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.686 [2024-11-15 12:50:42.885999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:02.686 [2024-11-15 12:50:42.886490] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
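The entries above show nvmf_tgt coming up inside the target namespace with --interrupt-mode: DPDK EAL is initialized, one reactor starts per core of the 0xF mask, and the app thread is switched to interrupt mode. Because the target is started with --wait-for-rpc, the harness must wait for the RPC socket before configuring it; the real helper is waitforlisten in autotest_common.sh, and the polling loop below is only a rough approximation of that step (assuming a local checkout in ./spdk):

    ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!
    until ./spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2                      # wait for /var/tmp/spdk.sock to accept RPCs
    done
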
00:29:02.686 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:02.686 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:29:02.686 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:02.686 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:02.686 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:02.686 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.686 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:02.686 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.686 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:02.686 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.686 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:02.686 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.686 12:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:02.945 [2024-11-15 12:50:43.052861] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:02.945 [2024-11-15 12:50:43.053065] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:02.945 [2024-11-15 12:50:43.054044] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:02.945 [2024-11-15 12:50:43.054860] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
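With the target still in its pre-init state, the test first tunes the bdev layer and only then finishes initialization, which is why the nvmf poll groups above are created already in interrupt mode. The entries that follow add the TCP transport, a 64 MiB malloc bdev and the cnode1 subsystem with a listener on 10.0.0.2:4420; condensed, that RPC sequence looks like this (paths shortened, all values taken from the trace):

    rpc=./spdk/scripts/rpc.py
    $rpc bdev_set_options -p 5 -c 1                  # small bdev_io pool/cache sizes for this test
    $rpc framework_start_init                        # leave the --wait-for-rpc pre-init phase
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB namespace backing bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
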
00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:02.945 [2024-11-15 12:50:43.062670] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:02.945 Malloc0 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:02.945 [2024-11-15 12:50:43.114881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1164585 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1164587 00:29:02.945 12:50:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1164589 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:02.945 { 00:29:02.945 "params": { 00:29:02.945 "name": "Nvme$subsystem", 00:29:02.945 "trtype": "$TEST_TRANSPORT", 00:29:02.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:02.945 "adrfam": "ipv4", 00:29:02.945 "trsvcid": "$NVMF_PORT", 00:29:02.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:02.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:02.945 "hdgst": ${hdgst:-false}, 00:29:02.945 "ddgst": ${ddgst:-false} 00:29:02.945 }, 00:29:02.945 "method": "bdev_nvme_attach_controller" 00:29:02.945 } 00:29:02.945 EOF 00:29:02.945 )") 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1164591 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:02.945 { 00:29:02.945 "params": { 00:29:02.945 "name": "Nvme$subsystem", 00:29:02.945 "trtype": "$TEST_TRANSPORT", 00:29:02.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:02.945 "adrfam": "ipv4", 00:29:02.945 "trsvcid": "$NVMF_PORT", 00:29:02.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:02.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:02.945 "hdgst": ${hdgst:-false}, 00:29:02.945 "ddgst": 
${ddgst:-false} 00:29:02.945 }, 00:29:02.945 "method": "bdev_nvme_attach_controller" 00:29:02.945 } 00:29:02.945 EOF 00:29:02.945 )") 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:02.945 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:02.945 { 00:29:02.945 "params": { 00:29:02.945 "name": "Nvme$subsystem", 00:29:02.945 "trtype": "$TEST_TRANSPORT", 00:29:02.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:02.946 "adrfam": "ipv4", 00:29:02.946 "trsvcid": "$NVMF_PORT", 00:29:02.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:02.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:02.946 "hdgst": ${hdgst:-false}, 00:29:02.946 "ddgst": ${ddgst:-false} 00:29:02.946 }, 00:29:02.946 "method": "bdev_nvme_attach_controller" 00:29:02.946 } 00:29:02.946 EOF 00:29:02.946 )") 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:02.946 { 00:29:02.946 "params": { 00:29:02.946 "name": "Nvme$subsystem", 00:29:02.946 "trtype": "$TEST_TRANSPORT", 00:29:02.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:02.946 "adrfam": "ipv4", 00:29:02.946 "trsvcid": "$NVMF_PORT", 00:29:02.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:02.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:02.946 "hdgst": ${hdgst:-false}, 00:29:02.946 "ddgst": ${ddgst:-false} 00:29:02.946 }, 00:29:02.946 "method": "bdev_nvme_attach_controller" 00:29:02.946 } 00:29:02.946 EOF 00:29:02.946 )") 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1164585 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:02.946 "params": { 00:29:02.946 "name": "Nvme1", 00:29:02.946 "trtype": "tcp", 00:29:02.946 "traddr": "10.0.0.2", 00:29:02.946 "adrfam": "ipv4", 00:29:02.946 "trsvcid": "4420", 00:29:02.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:02.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:02.946 "hdgst": false, 00:29:02.946 "ddgst": false 00:29:02.946 }, 00:29:02.946 "method": "bdev_nvme_attach_controller" 00:29:02.946 }' 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:02.946 "params": { 00:29:02.946 "name": "Nvme1", 00:29:02.946 "trtype": "tcp", 00:29:02.946 "traddr": "10.0.0.2", 00:29:02.946 "adrfam": "ipv4", 00:29:02.946 "trsvcid": "4420", 00:29:02.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:02.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:02.946 "hdgst": false, 00:29:02.946 "ddgst": false 00:29:02.946 }, 00:29:02.946 "method": "bdev_nvme_attach_controller" 00:29:02.946 }' 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:02.946 "params": { 00:29:02.946 "name": "Nvme1", 00:29:02.946 "trtype": "tcp", 00:29:02.946 "traddr": "10.0.0.2", 00:29:02.946 "adrfam": "ipv4", 00:29:02.946 "trsvcid": "4420", 00:29:02.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:02.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:02.946 "hdgst": false, 00:29:02.946 "ddgst": false 00:29:02.946 }, 00:29:02.946 "method": "bdev_nvme_attach_controller" 00:29:02.946 }' 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:02.946 12:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:02.946 "params": { 00:29:02.946 "name": "Nvme1", 00:29:02.946 "trtype": "tcp", 00:29:02.946 "traddr": "10.0.0.2", 00:29:02.946 "adrfam": "ipv4", 00:29:02.946 "trsvcid": "4420", 00:29:02.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:02.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:02.946 "hdgst": false, 00:29:02.946 "ddgst": false 00:29:02.946 }, 00:29:02.946 "method": "bdev_nvme_attach_controller" 00:29:02.946 }' 00:29:02.946 [2024-11-15 12:50:43.166989] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:29:02.946 [2024-11-15 12:50:43.166996] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:29:02.946 [2024-11-15 12:50:43.166996] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:29:02.946 [2024-11-15 12:50:43.166997] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:29:02.946 [2024-11-15 12:50:43.167085] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:02.946 [2024-11-15 12:50:43.167105] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-15 12:50:43.167104] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-15 12:50:43.167106] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:02.946 --proc-type=auto ] 00:29:02.946 --proc-type=auto ] 00:29:03.204 [2024-11-15 12:50:43.350840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.204 [2024-11-15 12:50:43.404132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:03.204 [2024-11-15 12:50:43.449457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.204 [2024-11-15 12:50:43.505535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:03.462 [2024-11-15 12:50:43.553167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.462 [2024-11-15 12:50:43.607227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:03.462 [2024-11-15 12:50:43.623704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.462 [2024-11-15 12:50:43.675891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:03.462 Running I/O for 1 seconds... 00:29:03.462 Running I/O for 1 seconds... 00:29:03.719 Running I/O for 1 seconds... 00:29:03.719 Running I/O for 1 seconds... 
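Condensed, the setup traced above is: one TCP transport and one malloc-backed subsystem on the target, then four bdevperf processes (write, read, flush and unmap, pinned to core masks 0x10/0x20/0x40/0x80) each reading a gen_nvmf_target_json config through process substitution, which is why the command lines show --json /dev/fd/63. A hedged sketch of the same flow, with the workspace paths, -i instance IDs and per-process core masks dropped for brevity:

  # target side (rpc_cmd is the autotest wrapper around scripts/rpc.py)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: one bdevperf per workload, queue depth 128, 4 KiB I/O, 1 s runs
  for w in write read flush unmap; do
      bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w "$w" -t 1 -s 256 &
  done
  wait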
00:29:04.653 192544.00 IOPS, 752.12 MiB/s 00:29:04.653 Latency(us) 00:29:04.653 [2024-11-15T11:50:44.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.653 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:04.653 Nvme1n1 : 1.00 192182.05 750.71 0.00 0.00 662.38 280.65 1881.13 00:29:04.653 [2024-11-15T11:50:44.997Z] =================================================================================================================== 00:29:04.653 [2024-11-15T11:50:44.997Z] Total : 192182.05 750.71 0.00 0.00 662.38 280.65 1881.13 00:29:04.653 6148.00 IOPS, 24.02 MiB/s 00:29:04.653 Latency(us) 00:29:04.653 [2024-11-15T11:50:44.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.653 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:04.653 Nvme1n1 : 1.02 6134.59 23.96 0.00 0.00 20596.38 1966.08 29903.83 00:29:04.653 [2024-11-15T11:50:44.997Z] =================================================================================================================== 00:29:04.653 [2024-11-15T11:50:44.997Z] Total : 6134.59 23.96 0.00 0.00 20596.38 1966.08 29903.83 00:29:04.653 9470.00 IOPS, 36.99 MiB/s [2024-11-15T11:50:44.997Z] 6047.00 IOPS, 23.62 MiB/s 00:29:04.653 Latency(us) 00:29:04.653 [2024-11-15T11:50:44.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.653 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:04.653 Nvme1n1 : 1.01 6178.61 24.14 0.00 0.00 20662.21 3835.07 40001.23 00:29:04.653 [2024-11-15T11:50:44.997Z] =================================================================================================================== 00:29:04.653 [2024-11-15T11:50:44.997Z] Total : 6178.61 24.14 0.00 0.00 20662.21 3835.07 40001.23 00:29:04.653 00:29:04.653 Latency(us) 00:29:04.653 [2024-11-15T11:50:44.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.653 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:04.653 Nvme1n1 : 1.05 9149.14 35.74 0.00 0.00 13401.64 4975.88 47962.64 00:29:04.653 [2024-11-15T11:50:44.997Z] =================================================================================================================== 00:29:04.653 [2024-11-15T11:50:44.997Z] Total : 9149.14 35.74 0.00 0.00 13401.64 4975.88 47962.64 00:29:04.653 12:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1164587 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1164589 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1164591 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:04.912 12:50:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:04.912 rmmod nvme_tcp 00:29:04.912 rmmod nvme_fabrics 00:29:04.912 rmmod nvme_keyring 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1164563 ']' 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1164563 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1164563 ']' 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1164563 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1164563 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1164563' 00:29:04.912 killing process with pid 1164563 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1164563 00:29:04.912 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1164563 00:29:05.173 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:05.173 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:05.173 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:05.173 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:05.173 12:50:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:29:05.173 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:05.173 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:05.173 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:05.173 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:05.173 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.173 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.173 12:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.092 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.092 00:29:07.092 real 0m7.079s 00:29:07.092 user 0m13.994s 00:29:07.092 sys 0m3.859s 00:29:07.092 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.092 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:07.092 ************************************ 00:29:07.092 END TEST nvmf_bdev_io_wait 00:29:07.092 ************************************ 00:29:07.092 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:07.092 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:07.092 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.092 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:07.352 ************************************ 00:29:07.352 START TEST nvmf_queue_depth 00:29:07.352 ************************************ 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:07.352 * Looking for test storage... 
00:29:07.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:07.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.352 --rc genhtml_branch_coverage=1 00:29:07.352 --rc genhtml_function_coverage=1 00:29:07.352 --rc genhtml_legend=1 00:29:07.352 --rc geninfo_all_blocks=1 00:29:07.352 --rc geninfo_unexecuted_blocks=1 00:29:07.352 00:29:07.352 ' 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:07.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.352 --rc genhtml_branch_coverage=1 00:29:07.352 --rc genhtml_function_coverage=1 00:29:07.352 --rc genhtml_legend=1 00:29:07.352 --rc geninfo_all_blocks=1 00:29:07.352 --rc geninfo_unexecuted_blocks=1 00:29:07.352 00:29:07.352 ' 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:07.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.352 --rc genhtml_branch_coverage=1 00:29:07.352 --rc genhtml_function_coverage=1 00:29:07.352 --rc genhtml_legend=1 00:29:07.352 --rc geninfo_all_blocks=1 00:29:07.352 --rc geninfo_unexecuted_blocks=1 00:29:07.352 00:29:07.352 ' 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:07.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.352 --rc genhtml_branch_coverage=1 00:29:07.352 --rc genhtml_function_coverage=1 00:29:07.352 --rc genhtml_legend=1 00:29:07.352 --rc geninfo_all_blocks=1 00:29:07.352 --rc 
geninfo_unexecuted_blocks=1 00:29:07.352 00:29:07.352 ' 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.352 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.353 12:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:09.886 12:50:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:09.886 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:09.886 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.886 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:29:09.887 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:09.887 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:09.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:29:09.887 00:29:09.887 --- 10.0.0.2 ping statistics --- 00:29:09.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.887 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:09.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:29:09.887 00:29:09.887 --- 10.0.0.1 ping statistics --- 00:29:09.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.887 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:09.887 12:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:09.887 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:09.887 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:09.887 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.887 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:09.887 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1166810 00:29:09.887 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:09.887 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1166810 00:29:09.887 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1166810 ']' 00:29:09.887 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.887 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.887 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
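Before the queue_depth test proper, the nvmf_tcp_init sequence above splits the two e810 ports: the target port (cvl_0_0) is moved into a fresh cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace on 10.0.0.1/24, TCP/4420 is opened with an iptables rule, and both directions are ping-checked before the interrupt-mode nvmf_tgt is started inside that namespace. The same plumbing, condensed (address flushes and the iptables comment tag omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target ns -> initiator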
00:29:09.887 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.887 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:09.887 [2024-11-15 12:50:50.077373] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:09.887 [2024-11-15 12:50:50.078714] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:29:09.887 [2024-11-15 12:50:50.078808] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.887 [2024-11-15 12:50:50.167932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.146 [2024-11-15 12:50:50.229515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.146 [2024-11-15 12:50:50.229570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.146 [2024-11-15 12:50:50.229599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.146 [2024-11-15 12:50:50.229618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.146 [2024-11-15 12:50:50.229628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.146 [2024-11-15 12:50:50.230325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.146 [2024-11-15 12:50:50.326849] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:10.146 [2024-11-15 12:50:50.327157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:10.146 [2024-11-15 12:50:50.378909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:10.146 Malloc0 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:10.146 [2024-11-15 12:50:50.443033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1166954 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1166954 /var/tmp/bdevperf.sock 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1166954 ']' 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:10.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.146 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:10.405 [2024-11-15 12:50:50.495529] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
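Before the controller attach and perform_tests calls that follow in the trace, the queue_depth setup above reduces to the sketch below; rpc_cmd is assumed to wrap scripts/rpc.py (the path recorded later in this log) against the default /var/tmp/spdk.sock, and all RPC names, arguments and bdevperf flags are copied verbatim from the xtrace output.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Target side (flags as traced): TCP transport, a 64 MiB malloc bdev with
  # 512-byte blocks, and subsystem cnode1 exposing it on 10.0.0.2:4420.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf starts idle (-z) on its own RPC socket, then runs
  # queue depth 1024, 4096-byte verify I/O for 10 seconds once the NVMe
  # controller is attached via bdev_nvme_attach_controller (see trace below).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &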
00:29:10.405 [2024-11-15 12:50:50.495617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166954 ] 00:29:10.405 [2024-11-15 12:50:50.560859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.405 [2024-11-15 12:50:50.617860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.405 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.405 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:10.405 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:10.405 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.405 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:10.663 NVMe0n1 00:29:10.663 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.663 12:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:10.921 Running I/O for 10 seconds... 00:29:12.791 8201.00 IOPS, 32.04 MiB/s [2024-11-15T11:50:54.070Z] 8652.00 IOPS, 33.80 MiB/s [2024-11-15T11:50:55.443Z] 8546.67 IOPS, 33.39 MiB/s [2024-11-15T11:50:56.379Z] 8676.25 IOPS, 33.89 MiB/s [2024-11-15T11:50:57.314Z] 8608.20 IOPS, 33.63 MiB/s [2024-11-15T11:50:58.249Z] 8699.83 IOPS, 33.98 MiB/s [2024-11-15T11:50:59.183Z] 8662.43 IOPS, 33.84 MiB/s [2024-11-15T11:51:00.118Z] 8699.75 IOPS, 33.98 MiB/s [2024-11-15T11:51:01.053Z] 8672.11 IOPS, 33.88 MiB/s [2024-11-15T11:51:01.311Z] 8698.90 IOPS, 33.98 MiB/s 00:29:20.967 Latency(us) 00:29:20.967 [2024-11-15T11:51:01.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.967 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:29:20.967 Verification LBA range: start 0x0 length 0x4000 00:29:20.967 NVMe0n1 : 10.10 8711.53 34.03 0.00 0.00 117059.24 20971.52 69516.71 00:29:20.967 [2024-11-15T11:51:01.311Z] =================================================================================================================== 00:29:20.967 [2024-11-15T11:51:01.311Z] Total : 8711.53 34.03 0.00 0.00 117059.24 20971.52 69516.71 00:29:20.967 { 00:29:20.967 "results": [ 00:29:20.967 { 00:29:20.967 "job": "NVMe0n1", 00:29:20.967 "core_mask": "0x1", 00:29:20.967 "workload": "verify", 00:29:20.967 "status": "finished", 00:29:20.967 "verify_range": { 00:29:20.967 "start": 0, 00:29:20.967 "length": 16384 00:29:20.967 }, 00:29:20.967 "queue_depth": 1024, 00:29:20.967 "io_size": 4096, 00:29:20.967 "runtime": 10.100296, 00:29:20.967 "iops": 8711.52687010361, 00:29:20.967 "mibps": 34.029401836342224, 00:29:20.967 "io_failed": 0, 00:29:20.967 "io_timeout": 0, 00:29:20.967 "avg_latency_us": 117059.24297399128, 00:29:20.967 "min_latency_us": 20971.52, 00:29:20.967 "max_latency_us": 69516.70518518519 00:29:20.967 } 00:29:20.967 ], 
00:29:20.967 "core_count": 1 00:29:20.968 } 00:29:20.968 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1166954 00:29:20.968 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1166954 ']' 00:29:20.968 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1166954 00:29:20.968 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:20.968 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:20.968 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1166954 00:29:20.968 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:20.968 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:20.968 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1166954' 00:29:20.968 killing process with pid 1166954 00:29:20.968 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1166954 00:29:20.968 Received shutdown signal, test time was about 10.000000 seconds 00:29:20.968 00:29:20.968 Latency(us) 00:29:20.968 [2024-11-15T11:51:01.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.968 [2024-11-15T11:51:01.312Z] =================================================================================================================== 00:29:20.968 [2024-11-15T11:51:01.312Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:20.968 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1166954 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.226 rmmod nvme_tcp 00:29:21.226 rmmod nvme_fabrics 00:29:21.226 rmmod nvme_keyring 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:29:21.226 12:51:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1166810 ']' 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1166810 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1166810 ']' 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1166810 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.226 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1166810 00:29:21.227 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:21.227 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:21.227 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1166810' 00:29:21.227 killing process with pid 1166810 00:29:21.227 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1166810 00:29:21.227 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1166810 00:29:21.485 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.485 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.485 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.485 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:21.485 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:29:21.485 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:21.485 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:29:21.485 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.485 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.485 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.485 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.485 12:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:24.022 00:29:24.022 real 0m16.364s 00:29:24.022 user 0m22.466s 00:29:24.022 sys 0m3.416s 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:24.022 ************************************ 00:29:24.022 END TEST nvmf_queue_depth 00:29:24.022 ************************************ 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:24.022 ************************************ 00:29:24.022 START TEST nvmf_target_multipath 00:29:24.022 ************************************ 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:24.022 * Looking for test storage... 00:29:24.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:29:24.022 12:51:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:24.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.022 --rc genhtml_branch_coverage=1 00:29:24.022 --rc genhtml_function_coverage=1 00:29:24.022 --rc genhtml_legend=1 00:29:24.022 --rc geninfo_all_blocks=1 00:29:24.022 --rc geninfo_unexecuted_blocks=1 00:29:24.022 00:29:24.022 ' 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:24.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.022 --rc genhtml_branch_coverage=1 00:29:24.022 --rc genhtml_function_coverage=1 00:29:24.022 --rc genhtml_legend=1 00:29:24.022 --rc geninfo_all_blocks=1 00:29:24.022 --rc geninfo_unexecuted_blocks=1 00:29:24.022 00:29:24.022 ' 00:29:24.022 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:24.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.022 --rc genhtml_branch_coverage=1 00:29:24.022 --rc genhtml_function_coverage=1 00:29:24.022 --rc genhtml_legend=1 00:29:24.023 --rc geninfo_all_blocks=1 00:29:24.023 --rc 
geninfo_unexecuted_blocks=1 00:29:24.023 00:29:24.023 ' 00:29:24.023 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:24.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.023 --rc genhtml_branch_coverage=1 00:29:24.023 --rc genhtml_function_coverage=1 00:29:24.023 --rc genhtml_legend=1 00:29:24.023 --rc geninfo_all_blocks=1 00:29:24.023 --rc geninfo_unexecuted_blocks=1 00:29:24.023 00:29:24.023 ' 00:29:24.023 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.023 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:29:24.023 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.023 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.023 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.023 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.023 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.023 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.023 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.023 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.023 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.023 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.023 12:51:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.023 12:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.928 12:51:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:25.928 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:25.928 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.928 12:51:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.928 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:25.929 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:25.929 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.929 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:26.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:29:26.189 00:29:26.189 --- 10.0.0.2 ping statistics --- 00:29:26.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.189 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:26.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:29:26.189 00:29:26.189 --- 10.0.0.1 ping statistics --- 00:29:26.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.189 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:26.189 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:29:26.190 only one NIC for nvmf test 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.190 rmmod nvme_tcp 00:29:26.190 rmmod nvme_fabrics 00:29:26.190 rmmod nvme_keyring 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:26.190 12:51:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.190 12:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:28.727 12:51:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:28.727 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.728 00:29:28.728 real 0m4.645s 00:29:28.728 user 0m0.953s 00:29:28.728 sys 0m1.704s 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:28.728 ************************************ 00:29:28.728 END TEST nvmf_target_multipath 00:29:28.728 ************************************ 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:28.728 ************************************ 00:29:28.728 START TEST nvmf_zcopy 00:29:28.728 ************************************ 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:28.728 * Looking for test storage... 
00:29:28.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:28.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.728 --rc genhtml_branch_coverage=1 00:29:28.728 --rc genhtml_function_coverage=1 00:29:28.728 --rc genhtml_legend=1 00:29:28.728 --rc geninfo_all_blocks=1 00:29:28.728 --rc geninfo_unexecuted_blocks=1 00:29:28.728 00:29:28.728 ' 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:28.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.728 --rc genhtml_branch_coverage=1 00:29:28.728 --rc genhtml_function_coverage=1 00:29:28.728 --rc genhtml_legend=1 00:29:28.728 --rc geninfo_all_blocks=1 00:29:28.728 --rc geninfo_unexecuted_blocks=1 00:29:28.728 00:29:28.728 ' 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:28.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.728 --rc genhtml_branch_coverage=1 00:29:28.728 --rc genhtml_function_coverage=1 00:29:28.728 --rc genhtml_legend=1 00:29:28.728 --rc geninfo_all_blocks=1 00:29:28.728 --rc geninfo_unexecuted_blocks=1 00:29:28.728 00:29:28.728 ' 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:28.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.728 --rc genhtml_branch_coverage=1 00:29:28.728 --rc genhtml_function_coverage=1 00:29:28.728 --rc genhtml_legend=1 00:29:28.728 --rc geninfo_all_blocks=1 00:29:28.728 --rc geninfo_unexecuted_blocks=1 00:29:28.728 00:29:28.728 ' 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.728 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.729 12:51:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:29:28.729 12:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:30.633 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.633 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:29:30.633 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:30.633 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:30.633 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:29:30.634 12:51:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:30.634 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:30.634 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:30.634 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:30.634 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:30.634 12:51:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:30.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:29:30.634 00:29:30.634 --- 10.0.0.2 ping statistics --- 00:29:30.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.634 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:29:30.634 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:30.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:29:30.634 00:29:30.634 --- 10.0.0.1 ping statistics --- 00:29:30.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.635 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:29:30.635 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.635 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:29:30.635 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:30.635 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.635 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:30.635 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:30.635 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.635 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:30.635 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:30.893 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:29:30.893 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:30.893 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.893 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:30.893 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1172730 00:29:30.893 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1172730 00:29:30.893 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:30.893 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1172730 ']' 00:29:30.893 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.893 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.893 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.893 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.893 12:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:30.893 [2024-11-15 12:51:11.042604] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:30.893 [2024-11-15 12:51:11.043628] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:29:30.893 [2024-11-15 12:51:11.043677] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.893 [2024-11-15 12:51:11.111072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.893 [2024-11-15 12:51:11.163168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.893 [2024-11-15 12:51:11.163224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.893 [2024-11-15 12:51:11.163248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.893 [2024-11-15 12:51:11.163258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.893 [2024-11-15 12:51:11.163268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.893 [2024-11-15 12:51:11.163799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.152 [2024-11-15 12:51:11.245716] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:31.152 [2024-11-15 12:51:11.246016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:31.152 [2024-11-15 12:51:11.300369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:31.152 [2024-11-15 12:51:11.316481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:29:31.152 12:51:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:31.152 malloc0 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:31.152 { 00:29:31.152 "params": { 00:29:31.152 "name": "Nvme$subsystem", 00:29:31.152 "trtype": "$TEST_TRANSPORT", 00:29:31.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:31.152 "adrfam": "ipv4", 00:29:31.152 "trsvcid": "$NVMF_PORT", 00:29:31.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:31.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:31.152 "hdgst": ${hdgst:-false}, 00:29:31.152 "ddgst": ${ddgst:-false} 00:29:31.152 }, 00:29:31.152 "method": "bdev_nvme_attach_controller" 00:29:31.152 } 00:29:31.152 EOF 00:29:31.152 )") 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:31.152 12:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:31.152 "params": { 00:29:31.152 "name": "Nvme1", 00:29:31.152 "trtype": "tcp", 00:29:31.152 "traddr": "10.0.0.2", 00:29:31.152 "adrfam": "ipv4", 00:29:31.152 "trsvcid": "4420", 00:29:31.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:31.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:31.152 "hdgst": false, 00:29:31.152 "ddgst": false 00:29:31.152 }, 00:29:31.152 "method": "bdev_nvme_attach_controller" 00:29:31.152 }' 00:29:31.152 [2024-11-15 12:51:11.398918] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:29:31.152 [2024-11-15 12:51:11.399013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1172774 ] 00:29:31.152 [2024-11-15 12:51:11.464749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.410 [2024-11-15 12:51:11.522858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.410 Running I/O for 10 seconds... 00:29:33.724 5609.00 IOPS, 43.82 MiB/s [2024-11-15T11:51:15.003Z] 5704.50 IOPS, 44.57 MiB/s [2024-11-15T11:51:15.937Z] 5737.67 IOPS, 44.83 MiB/s [2024-11-15T11:51:16.872Z] 5738.25 IOPS, 44.83 MiB/s [2024-11-15T11:51:17.806Z] 5750.60 IOPS, 44.93 MiB/s [2024-11-15T11:51:18.741Z] 5744.67 IOPS, 44.88 MiB/s [2024-11-15T11:51:20.116Z] 5739.00 IOPS, 44.84 MiB/s [2024-11-15T11:51:21.049Z] 5738.88 IOPS, 44.83 MiB/s [2024-11-15T11:51:21.984Z] 5744.33 IOPS, 44.88 MiB/s [2024-11-15T11:51:21.984Z] 5739.10 IOPS, 44.84 MiB/s 00:29:41.640 Latency(us) 00:29:41.640 [2024-11-15T11:51:21.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.640 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:29:41.640 Verification LBA range: start 0x0 length 0x1000 00:29:41.640 Nvme1n1 : 10.02 5742.09 44.86 0.00 0.00 22230.64 4004.98 29321.29 00:29:41.640 [2024-11-15T11:51:21.984Z] =================================================================================================================== 00:29:41.640 [2024-11-15T11:51:21.984Z] Total : 5742.09 44.86 0.00 0.00 22230.64 4004.98 29321.29 00:29:41.640 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1173958 00:29:41.640 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:29:41.640 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:41.640 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:29:41.640 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:29:41.641 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:41.641 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:41.641 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.641 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.641 { 00:29:41.641 "params": { 00:29:41.641 "name": "Nvme$subsystem", 00:29:41.641 "trtype": "$TEST_TRANSPORT", 00:29:41.641 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.641 "adrfam": "ipv4", 00:29:41.641 "trsvcid": "$NVMF_PORT", 00:29:41.641 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.641 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.641 "hdgst": ${hdgst:-false}, 00:29:41.641 "ddgst": ${ddgst:-false} 00:29:41.641 }, 00:29:41.641 "method": "bdev_nvme_attach_controller" 00:29:41.641 } 00:29:41.641 EOF 00:29:41.641 )") 00:29:41.641 [2024-11-15 12:51:21.960266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:29:41.641 [2024-11-15 12:51:21.960305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.641 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:41.641 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:41.641 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:41.641 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:41.641 "params": { 00:29:41.641 "name": "Nvme1", 00:29:41.641 "trtype": "tcp", 00:29:41.641 "traddr": "10.0.0.2", 00:29:41.641 "adrfam": "ipv4", 00:29:41.641 "trsvcid": "4420", 00:29:41.641 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:41.641 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:41.641 "hdgst": false, 00:29:41.641 "ddgst": false 00:29:41.641 }, 00:29:41.641 "method": "bdev_nvme_attach_controller" 00:29:41.641 }' 00:29:41.641 [2024-11-15 12:51:21.968207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.641 [2024-11-15 12:51:21.968228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.641 [2024-11-15 12:51:21.976205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.641 [2024-11-15 12:51:21.976225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:21.984207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:21.984226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:21.992205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:21.992224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.000204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.000223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.007196] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:29:41.900 [2024-11-15 12:51:22.007280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173958 ] 00:29:41.900 [2024-11-15 12:51:22.008205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.008226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.016203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.016222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.024205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.024224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.032204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.032222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.040206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.040225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.048206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.048226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.056205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.056225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.064206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.064225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.072208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.072228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.076424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.900 [2024-11-15 12:51:22.080209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.080230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.088246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.088280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.096224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.096253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.104208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.104229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:29:41.900 [2024-11-15 12:51:22.112207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.112227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.120207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.120235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.128210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.128246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.136212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.136233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.143667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.900 [2024-11-15 12:51:22.144207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.144226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.152208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.152228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.160238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.160269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.168242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.168276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.176241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.176274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.184248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.184285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.192246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.192282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.200241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.200275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.208241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.208276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.216210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.216230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 
12:51:22.224240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.224271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.232240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.232275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:41.900 [2024-11-15 12:51:22.240251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:41.900 [2024-11-15 12:51:22.240284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.248209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.248229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.256209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.256229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.264216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.264244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.272214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.272237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.280213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.280237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.288213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.288237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.296209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.296231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.304208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.304228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.312207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.312227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.320206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.320227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.328212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.328234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.336213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.336236] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.344213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.344238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.352240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.352279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.360214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.360239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.368210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.368234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 Running I/O for 5 seconds... 00:29:42.159 [2024-11-15 12:51:22.384167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.384194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.394315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.394344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.408863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.408891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.418484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.418512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.430459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.430486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.445446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.445475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.455101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.455127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.469767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.469820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.479639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.479666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.159 [2024-11-15 12:51:22.491529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.159 [2024-11-15 12:51:22.491554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.502208] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.502236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.517296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.517324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.526773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.526800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.541802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.541830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.551542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.551582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.563375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.563401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.576358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.576400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.585810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.585837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.597511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.597537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.608302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.608327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.619371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.619397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.632512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.632541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.642338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.642364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.654271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.654298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.671084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.671120] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.681161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.681187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.693010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.693036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.703929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.703956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.714680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.714728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.729237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.729265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.738661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.738687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.418 [2024-11-15 12:51:22.752826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.418 [2024-11-15 12:51:22.752852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.762370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.762396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.774376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.774404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.787804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.787832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.797194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.797222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.808540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.808565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.818930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.818958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.833230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.833256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.842359] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.842384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.853964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.853991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.870082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.870124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.879672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.879698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.891865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.891902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.902942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.902979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.916203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.916231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.925451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.925477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.937151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.937177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.947996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.948037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.677 [2024-11-15 12:51:22.959187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.677 [2024-11-15 12:51:22.959213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.678 [2024-11-15 12:51:22.970612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.678 [2024-11-15 12:51:22.970637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.678 [2024-11-15 12:51:22.986121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.678 [2024-11-15 12:51:22.986146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.678 [2024-11-15 12:51:22.995482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.678 [2024-11-15 12:51:22.995507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.678 [2024-11-15 12:51:23.009831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.678 [2024-11-15 12:51:23.009859] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.026788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.026817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.044350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.044376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.053881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.053907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.065616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.065641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.081390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.081415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.091034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.091061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.102580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.102605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.115388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.115431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.129682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.129742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.139198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.139225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.153665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.153689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.163065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.163091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.178567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.178592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.188301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.188326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.200478] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.200502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.211345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.211370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.224194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.224235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.233785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.233812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.245526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.245551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.260590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.260632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:42.936 [2024-11-15 12:51:23.269904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:42.936 [2024-11-15 12:51:23.269931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.195 [2024-11-15 12:51:23.281740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.195 [2024-11-15 12:51:23.281768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.195 [2024-11-15 12:51:23.297946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.195 [2024-11-15 12:51:23.297974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.195 [2024-11-15 12:51:23.307471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.195 [2024-11-15 12:51:23.307495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.195 [2024-11-15 12:51:23.319042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.195 [2024-11-15 12:51:23.319081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.195 [2024-11-15 12:51:23.333573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.195 [2024-11-15 12:51:23.333600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.195 [2024-11-15 12:51:23.342874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.195 [2024-11-15 12:51:23.342901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.195 [2024-11-15 12:51:23.356553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.195 [2024-11-15 12:51:23.356584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.195 [2024-11-15 12:51:23.366144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.195 [2024-11-15 12:51:23.366170] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.195 11629.00 IOPS, 90.85 MiB/s [2024-11-15T11:51:23.539Z] [2024-11-15 12:51:23.377786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.195 [2024-11-15 12:51:23.377813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.195 [2024-11-15 12:51:23.393779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.195 [2024-11-15 12:51:23.393807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.195 [2024-11-15 12:51:23.403159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.195 [2024-11-15 12:51:23.403184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.195 [2024-11-15 12:51:23.418465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.195 [2024-11-15 12:51:23.418492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.196 [2024-11-15 12:51:23.428163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.196 [2024-11-15 12:51:23.428190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.196 [2024-11-15 12:51:23.440058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.196 [2024-11-15 12:51:23.440084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.196 [2024-11-15 12:51:23.450884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.196 [2024-11-15 12:51:23.450911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.196 [2024-11-15 12:51:23.465686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.196 [2024-11-15 12:51:23.465737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.196 [2024-11-15 12:51:23.475075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.196 [2024-11-15 12:51:23.475103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.196 [2024-11-15 12:51:23.489800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.196 [2024-11-15 12:51:23.489828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.196 [2024-11-15 12:51:23.499198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.196 [2024-11-15 12:51:23.499224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.196 [2024-11-15 12:51:23.512904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.196 [2024-11-15 12:51:23.512933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.196 [2024-11-15 12:51:23.522372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.196 [2024-11-15 12:51:23.522397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.196 [2024-11-15 12:51:23.534059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.196 [2024-11-15 12:51:23.534101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.454 [2024-11-15 
12:51:23.549927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.454 [2024-11-15 12:51:23.549969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.454 [2024-11-15 12:51:23.559530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.454 [2024-11-15 12:51:23.559557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.454 [2024-11-15 12:51:23.571166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.454 [2024-11-15 12:51:23.571191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.454 [2024-11-15 12:51:23.583536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.454 [2024-11-15 12:51:23.583565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.454 [2024-11-15 12:51:23.597832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.454 [2024-11-15 12:51:23.597860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.454 [2024-11-15 12:51:23.607366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.454 [2024-11-15 12:51:23.607405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.454 [2024-11-15 12:51:23.619490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.454 [2024-11-15 12:51:23.619518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.454 [2024-11-15 12:51:23.630542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.454 [2024-11-15 12:51:23.630567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.454 [2024-11-15 12:51:23.646752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.454 [2024-11-15 12:51:23.646779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.454 [2024-11-15 12:51:23.656586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.454 [2024-11-15 12:51:23.656611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.454 [2024-11-15 12:51:23.668790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.454 [2024-11-15 12:51:23.668818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.454 [2024-11-15 12:51:23.679429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.455 [2024-11-15 12:51:23.679454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.455 [2024-11-15 12:51:23.689267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.455 [2024-11-15 12:51:23.689292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.455 [2024-11-15 12:51:23.701346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.455 [2024-11-15 12:51:23.701371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.455 [2024-11-15 12:51:23.711850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.455 [2024-11-15 12:51:23.711876] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.455 [2024-11-15 12:51:23.725590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.455 [2024-11-15 12:51:23.725630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.455 [2024-11-15 12:51:23.735082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.455 [2024-11-15 12:51:23.735109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.455 [2024-11-15 12:51:23.749591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.455 [2024-11-15 12:51:23.749616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.455 [2024-11-15 12:51:23.760176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.455 [2024-11-15 12:51:23.760201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.455 [2024-11-15 12:51:23.770778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.455 [2024-11-15 12:51:23.770805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.455 [2024-11-15 12:51:23.786457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.455 [2024-11-15 12:51:23.786484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.804318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.804343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.813753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.813781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.825832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.825860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.837044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.837084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.847754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.847780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.860065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.860093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.869422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.869462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.881117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.881143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.891453] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.891493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.906951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.906977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.922163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.922204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.931666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.931691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.943127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.943151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.957798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.957826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.967788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.967816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.979430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.979455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:23.995106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:23.995133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.713 [2024-11-15 12:51:24.009963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.713 [2024-11-15 12:51:24.009991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.714 [2024-11-15 12:51:24.019571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.714 [2024-11-15 12:51:24.019595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.714 [2024-11-15 12:51:24.031222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.714 [2024-11-15 12:51:24.031248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.714 [2024-11-15 12:51:24.045053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.714 [2024-11-15 12:51:24.045081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.714 [2024-11-15 12:51:24.054366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.714 [2024-11-15 12:51:24.054409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.065809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.065851] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.080412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.080438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.089270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.089295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.100891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.100916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.111472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.111497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.122909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.122953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.138379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.138421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.148054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.148093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.159596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.159621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.169958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.169984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.185541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.185566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.195187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.195213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.210175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.210200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.219656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.219683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.231638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.231664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.245192] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.245220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.254745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.254794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.269838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.269865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.279454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.279480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.291116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.291156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:43.972 [2024-11-15 12:51:24.303552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:43.972 [2024-11-15 12:51:24.303579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.317525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.317553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.326861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.326888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.340791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.340818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.349658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.349698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.361291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.361316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.371743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.371770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 11703.00 IOPS, 91.43 MiB/s [2024-11-15T11:51:24.575Z] [2024-11-15 12:51:24.385185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.385212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.394167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.394191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.405817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
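The loop above is the test repeatedly driving SPDK's namespace-add error path while I/O holds steady around 11.6-11.7k IOPS: each nvmf_subsystem_add_ns RPC asks for NSID 1, which is already attached, so spdk_nvmf_subsystem_add_ns_ext() rejects it and the RPC layer reports "Unable to add namespace". The script issuing these calls is not part of this excerpt; the sketch below is only one illustrative way to provoke the same pair of errors by hand against a running nvmf target with scripts/rpc.py (the NQN nqn.2016-06.io.spdk:cnode1 and the Malloc0 bdev are placeholder names, not taken from this log).

  # create a TCP transport, a backing bdev, and a subsystem on a running target
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # first attach succeeds and claims NSID 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
  # second attach requests the same NSID and fails with
  # "Requested NSID 1 already in use" / "Unable to add namespace"
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1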
00:29:44.231 [2024-11-15 12:51:24.405842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.416085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.416110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.426864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.426890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.441943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.441971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.451129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.451154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.462557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.462582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.476680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.476736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.486298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.486337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.498359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.498384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.512992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.513033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.522621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.522645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.537008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.537035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.546218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.546244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.557639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.557663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.231 [2024-11-15 12:51:24.568232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.231 [2024-11-15 12:51:24.568270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.579159] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.579183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.593655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.593682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.602703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.602753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.617526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.617551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.627759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.627803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.640961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.640989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.650417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.650443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.661807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.661833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.671790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.671818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.683710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.683772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.694825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.694860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.710252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.710289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.719811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.719839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.731813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.731841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.745731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.745759] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.755197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.755222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.769845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.769872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.779178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.779205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.793850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.793878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.804551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.804591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.815105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.815145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.490 [2024-11-15 12:51:24.827780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.490 [2024-11-15 12:51:24.827807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.748 [2024-11-15 12:51:24.840775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.748 [2024-11-15 12:51:24.840802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.748 [2024-11-15 12:51:24.850094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.748 [2024-11-15 12:51:24.850119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.748 [2024-11-15 12:51:24.861924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.748 [2024-11-15 12:51:24.861951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.748 [2024-11-15 12:51:24.879754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.748 [2024-11-15 12:51:24.879794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.748 [2024-11-15 12:51:24.889230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.748 [2024-11-15 12:51:24.889253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.748 [2024-11-15 12:51:24.900990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:24.901031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:24.911263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:24.911287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:24.925899] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:24.925926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:24.935318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:24.935359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:24.947361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:24.947387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:24.962833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:24.962874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:24.978453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:24.978497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:24.987934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:24.987961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:24.999747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:24.999774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:25.012455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:25.012483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:25.021640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:25.021666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:25.037576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:25.037616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:25.047020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:25.047047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:25.060627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:25.060653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:25.069786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:25.069828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.749 [2024-11-15 12:51:25.082091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.749 [2024-11-15 12:51:25.082117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.097901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.097929] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.107497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.107525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.119221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.119248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.132229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.132271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.141802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.141829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.157517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.157542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.167147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.167173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.181777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.181819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.191547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.191589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.203735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.203771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.214876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.214903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.231095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.231137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.246313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.246340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.256751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.256779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.269325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.269366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.279861] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.279904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.291317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.291343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.305853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.305883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.315254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.315281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.330587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.330625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.006 [2024-11-15 12:51:25.346155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.006 [2024-11-15 12:51:25.346183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 [2024-11-15 12:51:25.363798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.264 [2024-11-15 12:51:25.363826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 [2024-11-15 12:51:25.378070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.264 [2024-11-15 12:51:25.378116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 11716.00 IOPS, 91.53 MiB/s [2024-11-15T11:51:25.608Z] [2024-11-15 12:51:25.387441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.264 [2024-11-15 12:51:25.387495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 [2024-11-15 12:51:25.399193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.264 [2024-11-15 12:51:25.399220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 [2024-11-15 12:51:25.411432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.264 [2024-11-15 12:51:25.411460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 [2024-11-15 12:51:25.425660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.264 [2024-11-15 12:51:25.425689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 [2024-11-15 12:51:25.435080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.264 [2024-11-15 12:51:25.435108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 [2024-11-15 12:51:25.449601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.264 [2024-11-15 12:51:25.449627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 [2024-11-15 12:51:25.458844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:29:45.264 [2024-11-15 12:51:25.458872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 [2024-11-15 12:51:25.473254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.264 [2024-11-15 12:51:25.473279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 [2024-11-15 12:51:25.483423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.264 [2024-11-15 12:51:25.483448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 [2024-11-15 12:51:25.499258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.264 [2024-11-15 12:51:25.499285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 [2024-11-15 12:51:25.514886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.264 [2024-11-15 12:51:25.514914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 [2024-11-15 12:51:25.530487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.264 [2024-11-15 12:51:25.530513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.264 [2024-11-15 12:51:25.540148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.264 [2024-11-15 12:51:25.540175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.265 [2024-11-15 12:51:25.552139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.265 [2024-11-15 12:51:25.552166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.265 [2024-11-15 12:51:25.562800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.265 [2024-11-15 12:51:25.562843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.265 [2024-11-15 12:51:25.577484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.265 [2024-11-15 12:51:25.577529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.265 [2024-11-15 12:51:25.586763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.265 [2024-11-15 12:51:25.586790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.265 [2024-11-15 12:51:25.601251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.265 [2024-11-15 12:51:25.601277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.611582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.611610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.624385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.624434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.633530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.633555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.645372] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.645397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.656402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.656426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.667080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.667105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.679963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.679991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.693666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.693694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.702617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.702659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.718258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.718298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.728192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.728217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.740138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.740163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.750950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.750995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.766612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.766640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.776740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.776780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.790050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.790077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.799023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.799049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.814121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.523 [2024-11-15 12:51:25.814162] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.523 [2024-11-15 12:51:25.823594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.524 [2024-11-15 12:51:25.823635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.524 [2024-11-15 12:51:25.835372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.524 [2024-11-15 12:51:25.835398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.524 [2024-11-15 12:51:25.849064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.524 [2024-11-15 12:51:25.849116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.524 [2024-11-15 12:51:25.858451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.524 [2024-11-15 12:51:25.858477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:25.870359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:25.870384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:25.885375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:25.885401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:25.895312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:25.895339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:25.909234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:25.909261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:25.918946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:25.918975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:25.933878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:25.933921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:25.943659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:25.943687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:25.955542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:25.955568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:25.966673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:25.966715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:25.982658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:25.982685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:25.992467] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:25.992493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:26.004686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:26.004737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:26.015794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:26.015821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:26.028475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:26.028503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:26.037760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:26.037787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:26.053437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:26.053464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:26.063264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:26.063290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:26.077182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:26.077217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:26.086598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:26.086639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.782 [2024-11-15 12:51:26.098338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.782 [2024-11-15 12:51:26.098363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.783 [2024-11-15 12:51:26.112898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.783 [2024-11-15 12:51:26.112926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.783 [2024-11-15 12:51:26.122337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.783 [2024-11-15 12:51:26.122364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.133806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.133834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.148639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.148679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.158086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.158112] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.170018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.170045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.185697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.185747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.195041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.195083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.210107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.210134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.226790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.226818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.242400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.242430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.251747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.251774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.263536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.263563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.274457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.274484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.290165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.290192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.299669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.299696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.311427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.311462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.323982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.324009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.333913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.333959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.346567] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.346597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.362245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.362287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 [2024-11-15 12:51:26.372207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.372236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.041 11723.00 IOPS, 91.59 MiB/s [2024-11-15T11:51:26.385Z] [2024-11-15 12:51:26.384061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.041 [2024-11-15 12:51:26.384104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.299 [2024-11-15 12:51:26.394942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.299 [2024-11-15 12:51:26.394970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.299 [2024-11-15 12:51:26.410446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.299 [2024-11-15 12:51:26.410472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.299 [2024-11-15 12:51:26.420340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.299 [2024-11-15 12:51:26.420370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.432282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.432308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.443162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.443188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.454385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.454412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.470280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.470306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.488052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.488094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.497459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.497485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.509114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.509139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.519954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:29:46.300 [2024-11-15 12:51:26.519981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.530597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.530624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.546411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.546439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.555294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.555319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.569537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.569563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.579293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.579319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.590942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.590969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.605126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.605153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.614407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.614434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.626197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.626225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.300 [2024-11-15 12:51:26.641244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.300 [2024-11-15 12:51:26.641271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.650402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.650429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.661872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.661898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.677911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.677939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.686831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.686858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.700456] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.700483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.710084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.710110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.721608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.721635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.732278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.732305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.743267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.743293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.756954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.756982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.766300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.766341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.778144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.778186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.794406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.794447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.803925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.803952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.815578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.815620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.829932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.829961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.839350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.839392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.853085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.853111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.862554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.862580] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.878716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.878752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.888957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.889009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.558 [2024-11-15 12:51:26.900360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.558 [2024-11-15 12:51:26.900401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:26.910946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:26.910974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:26.923590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:26.923618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:26.937698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:26.937736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:26.946808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:26.946836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:26.962659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:26.962700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:26.977590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:26.977633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:26.987063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:26.987113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:27.001387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:27.001413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:27.010710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:27.010748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:27.022576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:27.022603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:27.035352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:27.035381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:27.048795] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:27.048824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:27.058561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:27.058586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:27.070617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:27.070642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.816 [2024-11-15 12:51:27.086196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.816 [2024-11-15 12:51:27.086222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.817 [2024-11-15 12:51:27.095605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.817 [2024-11-15 12:51:27.095632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.817 [2024-11-15 12:51:27.107371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.817 [2024-11-15 12:51:27.107398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.817 [2024-11-15 12:51:27.118420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.817 [2024-11-15 12:51:27.118445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.817 [2024-11-15 12:51:27.131569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.817 [2024-11-15 12:51:27.131597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.817 [2024-11-15 12:51:27.145709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.817 [2024-11-15 12:51:27.145745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.817 [2024-11-15 12:51:27.155184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.817 [2024-11-15 12:51:27.155211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.169599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.169624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.179222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.179247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.193503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.193530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.203583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.203611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.215512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.215560] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.229652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.229679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.239553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.239578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.251030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.251057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.263431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.263459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.273214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.273240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.285025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.285052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.295295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.295320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.309040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.309082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.318627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.318652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.334769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.334798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.351930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.351958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.362019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.362046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.375498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.375526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 11730.20 IOPS, 91.64 MiB/s [2024-11-15T11:51:27.419Z] [2024-11-15 12:51:27.388259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.388287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 
12:51:27.396212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.396236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:47.075
00:29:47.075                                                                                                  Latency(us)
00:29:47.075 [2024-11-15T11:51:27.419Z] Device Information                                                   : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:47.075 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:29:47.075     Nvme1n1                                                                                      :       5.01   11730.78      91.65       0.00       0.00   10897.97    2827.76   17864.63
00:29:47.075 [2024-11-15T11:51:27.419Z] ===================================================================================================================
00:29:47.075 [2024-11-15T11:51:27.419Z] Total                                                                :              11730.78      91.65       0.00       0.00   10897.97    2827.76   17864.63
00:29:47.075 [2024-11-15 12:51:27.404209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.404240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.075 [2024-11-15 12:51:27.412210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.075 [2024-11-15 12:51:27.412234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.420234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.420263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.428271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.428315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.436266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.436307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.444262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.444304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.452261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.452303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.460263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.460307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.468270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.468313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.476267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.476310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.484271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.484315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.492287]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.492336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.500273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.500324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.508272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.508318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.516264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.516308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.524263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.524304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.532263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.532306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.540207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.540226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.548205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.548225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.556205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.556225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.564211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.564233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.576292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.576349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.584262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.584302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.592213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.592234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.600205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.600225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.608204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.608223] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 [2024-11-15 12:51:27.616201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.334 [2024-11-15 12:51:27.616220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1173958) - No such process 00:29:47.334 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1173958 00:29:47.334 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.334 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.334 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:47.334 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.334 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:47.334 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.334 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:47.334 delay0 00:29:47.334 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.334 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:29:47.334 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.334 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:47.334 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.335 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:29:47.593 [2024-11-15 12:51:27.735184] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:55.785 Initializing NVMe Controllers 00:29:55.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:55.785 Initialization complete. Launching workers. 
00:29:55.785 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 234, failed: 26214 00:29:55.785 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26301, failed to submit 147 00:29:55.785 success 26230, unsuccessful 71, failed 0 00:29:55.785 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:29:55.785 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:29:55.785 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:55.785 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:29:55.785 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:55.785 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:29:55.785 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:55.785 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:55.785 rmmod nvme_tcp 00:29:55.785 rmmod nvme_fabrics 00:29:55.785 rmmod nvme_keyring 00:29:55.785 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:55.785 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:29:55.785 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:29:55.785 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1172730 ']' 00:29:55.785 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1172730 00:29:55.785 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1172730 ']' 00:29:55.786 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1172730 00:29:55.786 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:29:55.786 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:55.786 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1172730 00:29:55.786 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:55.786 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:55.786 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1172730' 00:29:55.786 killing process with pid 1172730 00:29:55.786 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1172730 00:29:55.786 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1172730 00:29:55.786 12:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:55.786 12:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:55.786 12:51:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:55.786 12:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:29:55.786 12:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:29:55.786 12:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:55.786 12:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:29:55.786 12:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:55.786 12:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:55.786 12:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.786 12:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.786 12:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:57.192 00:29:57.192 real 0m28.692s 00:29:57.192 user 0m40.811s 00:29:57.192 sys 0m9.888s 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:57.192 ************************************ 00:29:57.192 END TEST nvmf_zcopy 00:29:57.192 ************************************ 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:57.192 ************************************ 00:29:57.192 START TEST nvmf_nmic 00:29:57.192 ************************************ 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:57.192 * Looking for test storage... 
00:29:57.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:57.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.192 --rc genhtml_branch_coverage=1 00:29:57.192 --rc genhtml_function_coverage=1 00:29:57.192 --rc genhtml_legend=1 00:29:57.192 --rc geninfo_all_blocks=1 00:29:57.192 --rc geninfo_unexecuted_blocks=1 00:29:57.192 00:29:57.192 ' 00:29:57.192 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:57.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.193 --rc genhtml_branch_coverage=1 00:29:57.193 --rc genhtml_function_coverage=1 00:29:57.193 --rc genhtml_legend=1 00:29:57.193 --rc geninfo_all_blocks=1 00:29:57.193 --rc geninfo_unexecuted_blocks=1 00:29:57.193 00:29:57.193 ' 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:57.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.193 --rc genhtml_branch_coverage=1 00:29:57.193 --rc genhtml_function_coverage=1 00:29:57.193 --rc genhtml_legend=1 00:29:57.193 --rc geninfo_all_blocks=1 00:29:57.193 --rc geninfo_unexecuted_blocks=1 00:29:57.193 00:29:57.193 ' 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:57.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.193 --rc genhtml_branch_coverage=1 00:29:57.193 --rc genhtml_function_coverage=1 00:29:57.193 --rc genhtml_legend=1 00:29:57.193 --rc geninfo_all_blocks=1 00:29:57.193 --rc geninfo_unexecuted_blocks=1 00:29:57.193 00:29:57.193 ' 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.193 12:51:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:29:57.193 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.727 12:51:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.727 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:59.728 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.728 12:51:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:59.728 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:59.728 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.728 
12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:59.728 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:29:59.728 00:29:59.728 --- 10.0.0.2 ping statistics --- 00:29:59.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.728 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:29:59.728 00:29:59.728 --- 10.0.0.1 ping statistics --- 00:29:59.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.728 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:29:59.728 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1177476 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1177476 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1177476 ']' 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.729 [2024-11-15 12:51:39.707329] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:59.729 [2024-11-15 12:51:39.708371] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:29:59.729 [2024-11-15 12:51:39.708438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.729 [2024-11-15 12:51:39.779275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:59.729 [2024-11-15 12:51:39.840523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.729 [2024-11-15 12:51:39.840578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.729 [2024-11-15 12:51:39.840606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.729 [2024-11-15 12:51:39.840617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.729 [2024-11-15 12:51:39.840627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.729 [2024-11-15 12:51:39.842322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.729 [2024-11-15 12:51:39.842388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.729 [2024-11-15 12:51:39.842454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.729 [2024-11-15 12:51:39.842513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.729 [2024-11-15 12:51:39.937641] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:59.729 [2024-11-15 12:51:39.937901] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:59.729 [2024-11-15 12:51:39.938165] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:59.729 [2024-11-15 12:51:39.938842] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:59.729 [2024-11-15 12:51:39.939096] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.729 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.729 [2024-11-15 12:51:39.987131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.729 Malloc0 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:59.729 
12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.729 [2024-11-15 12:51:40.055307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:29:59.729 test case1: single bdev can't be used in multiple subsystems 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.729 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.987 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.987 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:29:59.987 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:29:59.987 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.987 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.987 [2024-11-15 12:51:40.079057] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:29:59.987 [2024-11-15 12:51:40.079086] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:29:59.987 [2024-11-15 12:51:40.079117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:59.987 request: 00:29:59.987 { 00:29:59.987 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:29:59.987 "namespace": { 00:29:59.987 "bdev_name": "Malloc0", 00:29:59.987 "no_auto_visible": false 00:29:59.987 }, 00:29:59.987 "method": "nvmf_subsystem_add_ns", 00:29:59.987 "req_id": 1 00:29:59.987 } 00:29:59.987 Got JSON-RPC error response 00:29:59.987 response: 00:29:59.987 { 00:29:59.987 "code": -32602, 00:29:59.987 "message": "Invalid parameters" 00:29:59.987 } 00:29:59.987 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:59.987 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:29:59.987 12:51:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:29:59.987 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:29:59.988 Adding namespace failed - expected result. 00:29:59.988 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:29:59.988 test case2: host connect to nvmf target in multiple paths 00:29:59.988 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:59.988 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.988 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:59.988 [2024-11-15 12:51:40.087140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:59.988 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.988 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:00.246 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:00.246 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:00.246 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:30:00.246 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:00.246 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:00.246 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:30:02.769 12:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:02.769 12:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:02.769 12:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:02.769 12:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:02.769 12:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:02.769 12:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:30:02.769 12:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:02.769 [global] 00:30:02.769 thread=1 00:30:02.769 invalidate=1 
00:30:02.769 rw=write 00:30:02.769 time_based=1 00:30:02.769 runtime=1 00:30:02.769 ioengine=libaio 00:30:02.769 direct=1 00:30:02.769 bs=4096 00:30:02.769 iodepth=1 00:30:02.769 norandommap=0 00:30:02.769 numjobs=1 00:30:02.769 00:30:02.769 verify_dump=1 00:30:02.769 verify_backlog=512 00:30:02.769 verify_state_save=0 00:30:02.769 do_verify=1 00:30:02.769 verify=crc32c-intel 00:30:02.769 [job0] 00:30:02.769 filename=/dev/nvme0n1 00:30:02.769 Could not set queue depth (nvme0n1) 00:30:02.769 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:02.769 fio-3.35 00:30:02.769 Starting 1 thread 00:30:03.702 00:30:03.702 job0: (groupid=0, jobs=1): err= 0: pid=1177858: Fri Nov 15 12:51:43 2024 00:30:03.702 read: IOPS=2400, BW=9602KiB/s (9833kB/s)(9612KiB/1001msec) 00:30:03.702 slat (nsec): min=5367, max=29770, avg=6803.88, stdev=1610.05 00:30:03.702 clat (usec): min=166, max=531, avg=227.53, stdev=28.74 00:30:03.702 lat (usec): min=173, max=539, avg=234.34, stdev=28.83 00:30:03.702 clat percentiles (usec): 00:30:03.702 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 206], 00:30:03.702 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:30:03.702 | 70.00th=[ 227], 80.00th=[ 258], 90.00th=[ 277], 95.00th=[ 281], 00:30:03.702 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 306], 99.95th=[ 412], 00:30:03.702 | 99.99th=[ 529] 00:30:03.702 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:30:03.702 slat (nsec): min=6743, max=68571, avg=9645.39, stdev=3982.08 00:30:03.702 clat (usec): min=130, max=1846, avg=156.23, stdev=37.39 00:30:03.702 lat (usec): min=137, max=1856, avg=165.87, stdev=38.69 00:30:03.702 clat percentiles (usec): 00:30:03.702 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:30:03.702 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 153], 00:30:03.702 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 178], 95.00th=[ 190], 00:30:03.702 | 99.00th=[ 241], 99.50th=[ 243], 99.90th=[ 258], 99.95th=[ 330], 00:30:03.702 | 99.99th=[ 1844] 00:30:03.702 bw ( KiB/s): min=11504, max=11504, per=100.00%, avg=11504.00, stdev= 0.00, samples=1 00:30:03.702 iops : min= 2876, max= 2876, avg=2876.00, stdev= 0.00, samples=1 00:30:03.702 lat (usec) : 250=89.18%, 500=10.78%, 750=0.02% 00:30:03.702 lat (msec) : 2=0.02% 00:30:03.702 cpu : usr=2.90%, sys=5.80%, ctx=4964, majf=0, minf=1 00:30:03.702 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:03.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.702 issued rwts: total=2403,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:03.702 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:03.702 00:30:03.702 Run status group 0 (all jobs): 00:30:03.702 READ: bw=9602KiB/s (9833kB/s), 9602KiB/s-9602KiB/s (9833kB/s-9833kB/s), io=9612KiB (9843kB), run=1001-1001msec 00:30:03.702 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:30:03.702 00:30:03.702 Disk stats (read/write): 00:30:03.702 nvme0n1: ios=2098/2421, merge=0/0, ticks=465/376, in_queue=841, util=91.38% 00:30:03.702 12:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:03.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:03.702 12:51:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:03.702 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:30:03.702 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:03.702 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:03.702 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:03.702 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:03.960 rmmod nvme_tcp 00:30:03.960 rmmod nvme_fabrics 00:30:03.960 rmmod nvme_keyring 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1177476 ']' 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1177476 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1177476 ']' 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1177476 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1177476 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1177476' 00:30:03.960 killing process with pid 1177476 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1177476 00:30:03.960 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1177476 00:30:04.220 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:04.220 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:04.220 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:04.220 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:30:04.220 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:30:04.220 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:04.220 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:30:04.220 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.220 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:04.220 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.220 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.220 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.122 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:06.122 00:30:06.122 real 0m9.162s 00:30:06.122 user 0m17.131s 00:30:06.122 sys 0m3.434s 00:30:06.122 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.122 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:06.122 ************************************ 00:30:06.122 END TEST nvmf_nmic 00:30:06.122 ************************************ 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:06.382 ************************************ 00:30:06.382 START TEST nvmf_fio_target 00:30:06.382 ************************************ 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:06.382 * Looking for test storage... 
00:30:06.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:06.382 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:06.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.383 --rc genhtml_branch_coverage=1 00:30:06.383 --rc genhtml_function_coverage=1 00:30:06.383 --rc genhtml_legend=1 00:30:06.383 --rc geninfo_all_blocks=1 00:30:06.383 --rc geninfo_unexecuted_blocks=1 00:30:06.383 00:30:06.383 ' 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:06.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.383 --rc genhtml_branch_coverage=1 00:30:06.383 --rc genhtml_function_coverage=1 00:30:06.383 --rc genhtml_legend=1 00:30:06.383 --rc geninfo_all_blocks=1 00:30:06.383 --rc geninfo_unexecuted_blocks=1 00:30:06.383 00:30:06.383 ' 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:06.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.383 --rc genhtml_branch_coverage=1 00:30:06.383 --rc genhtml_function_coverage=1 00:30:06.383 --rc genhtml_legend=1 00:30:06.383 --rc geninfo_all_blocks=1 00:30:06.383 --rc geninfo_unexecuted_blocks=1 00:30:06.383 00:30:06.383 ' 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:06.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.383 --rc genhtml_branch_coverage=1 00:30:06.383 --rc genhtml_function_coverage=1 00:30:06.383 --rc genhtml_legend=1 00:30:06.383 --rc geninfo_all_blocks=1 00:30:06.383 --rc geninfo_unexecuted_blocks=1 00:30:06.383 
00:30:06.383 ' 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.383 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:08.914 12:51:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.914 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:08.915 12:51:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:08.915 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:08.915 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:08.915 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:08.915 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:08.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:30:08.915 00:30:08.915 --- 10.0.0.2 ping statistics --- 00:30:08.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.915 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:08.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:30:08.915 00:30:08.915 --- 10.0.0.1 ping statistics --- 00:30:08.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.915 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:08.915 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:08.915 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:08.915 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:08.915 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:08.915 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:08.916 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1180058 00:30:08.916 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:08.916 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1180058 00:30:08.916 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1180058 ']' 00:30:08.916 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.916 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.916 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
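For reference, the per-test TCP network setup traced above reduces to the following shell sketch. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this run (nvmf/common.sh derives them from the two detected e810 ports); every command below appears verbatim in the trace.

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one NIC port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP to the listener port
  ping -c 1 10.0.0.2                                                  # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1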
00:30:08.916 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.916 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:08.916 [2024-11-15 12:51:49.069912] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:08.916 [2024-11-15 12:51:49.070987] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:30:08.916 [2024-11-15 12:51:49.071067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.916 [2024-11-15 12:51:49.142166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:08.916 [2024-11-15 12:51:49.200804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.916 [2024-11-15 12:51:49.200859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.916 [2024-11-15 12:51:49.200883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.916 [2024-11-15 12:51:49.200894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.916 [2024-11-15 12:51:49.200904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.916 [2024-11-15 12:51:49.202486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.916 [2024-11-15 12:51:49.202544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:08.916 [2024-11-15 12:51:49.202609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:08.916 [2024-11-15 12:51:49.202613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.175 [2024-11-15 12:51:49.289990] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:09.175 [2024-11-15 12:51:49.290184] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:09.175 [2024-11-15 12:51:49.290479] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:09.175 [2024-11-15 12:51:49.291131] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:09.175 [2024-11-15 12:51:49.291376] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
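The fio target itself is started inside that namespace in interrupt mode and then configured over rpc.py, as the trace below shows. A condensed sketch of that sequence follows; $SPDK_DIR is shorthand for the workspace checkout used in this job, and $NVME_HOSTNQN/$NVME_HOSTID are the values generated by nvmf/common.sh above.

  NS="ip netns exec cvl_0_0_ns_spdk"
  $NS $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  # once /var/tmp/spdk.sock is up (waitforlisten), configure the target:
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512                  # repeated for Malloc0..Malloc6
  $SPDK_DIR/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $SPDK_DIR/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420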
00:30:09.175 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.175 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:30:09.175 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:09.175 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:09.175 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:09.175 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.175 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:09.434 [2024-11-15 12:51:49.583380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.434 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:09.694 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:09.694 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:09.952 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:09.952 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:10.210 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:10.210 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:10.776 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:10.776 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:10.776 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:11.342 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:11.342 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:11.600 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:11.600 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:11.858 12:51:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:30:11.858 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:12.116 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:12.374 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:12.374 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:12.632 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:12.632 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:12.889 12:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:13.147 [2024-11-15 12:51:53.391497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.147 12:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:13.405 12:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:13.662 12:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:13.919 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:13.919 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:30:13.919 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:13.919 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:30:13.919 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:30:13.919 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:30:15.816 12:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:15.816 12:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:30:15.816 12:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:15.816 12:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:30:15.816 12:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:15.816 12:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:30:15.816 12:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:15.816 [global] 00:30:15.816 thread=1 00:30:15.816 invalidate=1 00:30:15.816 rw=write 00:30:15.816 time_based=1 00:30:15.816 runtime=1 00:30:15.816 ioengine=libaio 00:30:15.816 direct=1 00:30:15.816 bs=4096 00:30:15.816 iodepth=1 00:30:15.816 norandommap=0 00:30:15.816 numjobs=1 00:30:15.816 00:30:15.816 verify_dump=1 00:30:15.816 verify_backlog=512 00:30:15.816 verify_state_save=0 00:30:15.816 do_verify=1 00:30:15.816 verify=crc32c-intel 00:30:15.816 [job0] 00:30:15.816 filename=/dev/nvme0n1 00:30:15.816 [job1] 00:30:15.816 filename=/dev/nvme0n2 00:30:15.816 [job2] 00:30:15.816 filename=/dev/nvme0n3 00:30:15.816 [job3] 00:30:15.816 filename=/dev/nvme0n4 00:30:16.074 Could not set queue depth (nvme0n1) 00:30:16.074 Could not set queue depth (nvme0n2) 00:30:16.074 Could not set queue depth (nvme0n3) 00:30:16.074 Could not set queue depth (nvme0n4) 00:30:16.074 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:16.074 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:16.074 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:16.074 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:16.074 fio-3.35 00:30:16.074 Starting 4 threads 00:30:17.447 00:30:17.447 job0: (groupid=0, jobs=1): err= 0: pid=1181012: Fri Nov 15 12:51:57 2024 00:30:17.447 read: IOPS=965, BW=3864KiB/s (3956kB/s)(3968KiB/1027msec) 00:30:17.447 slat (nsec): min=5281, max=51921, avg=15209.51, stdev=6880.94 00:30:17.447 clat (usec): min=208, max=42383, avg=700.73, stdev=3478.61 00:30:17.447 lat (usec): min=215, max=42401, avg=715.94, stdev=3479.21 00:30:17.447 clat percentiles (usec): 00:30:17.447 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 258], 20.00th=[ 297], 00:30:17.447 | 30.00th=[ 334], 40.00th=[ 367], 50.00th=[ 388], 60.00th=[ 412], 00:30:17.447 | 70.00th=[ 465], 80.00th=[ 502], 90.00th=[ 562], 95.00th=[ 644], 00:30:17.447 | 99.00th=[ 914], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:17.447 | 99.99th=[42206] 00:30:17.447 write: IOPS=997, BW=3988KiB/s (4084kB/s)(4096KiB/1027msec); 0 zone resets 00:30:17.447 slat (nsec): min=7943, max=52659, avg=22014.74, stdev=7733.79 00:30:17.447 clat (usec): min=154, max=1147, avg=276.03, stdev=83.57 00:30:17.447 lat (usec): min=165, max=1168, avg=298.05, stdev=84.94 00:30:17.447 clat percentiles (usec): 00:30:17.447 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 200], 00:30:17.447 | 30.00th=[ 212], 40.00th=[ 231], 50.00th=[ 273], 60.00th=[ 289], 00:30:17.447 | 70.00th=[ 306], 80.00th=[ 334], 90.00th=[ 396], 95.00th=[ 429], 00:30:17.447 | 99.00th=[ 498], 
99.50th=[ 510], 99.90th=[ 523], 99.95th=[ 1156], 00:30:17.447 | 99.99th=[ 1156] 00:30:17.447 bw ( KiB/s): min= 4096, max= 4096, per=24.02%, avg=4096.00, stdev= 0.00, samples=2 00:30:17.447 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:30:17.447 lat (usec) : 250=26.69%, 500=62.35%, 750=9.72%, 1000=0.74% 00:30:17.447 lat (msec) : 2=0.15%, 50=0.35% 00:30:17.447 cpu : usr=3.22%, sys=4.39%, ctx=2016, majf=0, minf=2 00:30:17.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:17.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.447 issued rwts: total=992,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:17.447 job1: (groupid=0, jobs=1): err= 0: pid=1181013: Fri Nov 15 12:51:57 2024 00:30:17.447 read: IOPS=22, BW=91.1KiB/s (93.3kB/s)(92.0KiB/1010msec) 00:30:17.447 slat (nsec): min=10293, max=33790, avg=24675.00, stdev=8758.63 00:30:17.447 clat (usec): min=248, max=41014, avg=39163.84, stdev=8483.96 00:30:17.447 lat (usec): min=266, max=41031, avg=39188.51, stdev=8485.48 00:30:17.447 clat percentiles (usec): 00:30:17.447 | 1.00th=[ 249], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:30:17.447 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:17.447 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:17.447 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:17.447 | 99.99th=[41157] 00:30:17.447 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:30:17.447 slat (nsec): min=7749, max=52019, avg=17771.03, stdev=6791.94 00:30:17.447 clat (usec): min=151, max=277, avg=188.70, stdev=14.20 00:30:17.447 lat (usec): min=160, max=312, avg=206.47, stdev=18.02 00:30:17.447 clat percentiles (usec): 00:30:17.447 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 180], 00:30:17.447 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 192], 00:30:17.447 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 208], 00:30:17.447 | 99.00th=[ 227], 99.50th=[ 239], 99.90th=[ 277], 99.95th=[ 277], 00:30:17.447 | 99.99th=[ 277] 00:30:17.447 bw ( KiB/s): min= 4096, max= 4096, per=24.02%, avg=4096.00, stdev= 0.00, samples=1 00:30:17.447 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:17.447 lat (usec) : 250=95.70%, 500=0.19% 00:30:17.447 lat (msec) : 50=4.11% 00:30:17.447 cpu : usr=0.79%, sys=1.09%, ctx=535, majf=0, minf=2 00:30:17.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:17.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.447 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:17.447 job2: (groupid=0, jobs=1): err= 0: pid=1181014: Fri Nov 15 12:51:57 2024 00:30:17.447 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:30:17.447 slat (nsec): min=7471, max=58206, avg=16858.22, stdev=6701.10 00:30:17.447 clat (usec): min=252, max=42307, avg=582.42, stdev=2562.98 00:30:17.447 lat (usec): min=263, max=42326, avg=599.27, stdev=2563.17 00:30:17.447 clat percentiles (usec): 00:30:17.447 | 1.00th=[ 277], 5.00th=[ 306], 10.00th=[ 330], 20.00th=[ 351], 00:30:17.447 | 30.00th=[ 367], 40.00th=[ 
383], 50.00th=[ 400], 60.00th=[ 424], 00:30:17.447 | 70.00th=[ 461], 80.00th=[ 498], 90.00th=[ 537], 95.00th=[ 578], 00:30:17.447 | 99.00th=[ 791], 99.50th=[ 1139], 99.90th=[42206], 99.95th=[42206], 00:30:17.447 | 99.99th=[42206] 00:30:17.447 write: IOPS=1305, BW=5223KiB/s (5348kB/s)(5228KiB/1001msec); 0 zone resets 00:30:17.447 slat (nsec): min=9625, max=66579, avg=22893.61, stdev=8629.04 00:30:17.447 clat (usec): min=166, max=510, avg=263.35, stdev=65.21 00:30:17.447 lat (usec): min=176, max=552, avg=286.24, stdev=67.57 00:30:17.447 clat percentiles (usec): 00:30:17.447 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 194], 20.00th=[ 208], 00:30:17.447 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 247], 60.00th=[ 262], 00:30:17.447 | 70.00th=[ 285], 80.00th=[ 310], 90.00th=[ 359], 95.00th=[ 404], 00:30:17.447 | 99.00th=[ 441], 99.50th=[ 478], 99.90th=[ 510], 99.95th=[ 510], 00:30:17.447 | 99.99th=[ 510] 00:30:17.447 bw ( KiB/s): min= 5808, max= 5808, per=34.05%, avg=5808.00, stdev= 0.00, samples=1 00:30:17.447 iops : min= 1452, max= 1452, avg=1452.00, stdev= 0.00, samples=1 00:30:17.447 lat (usec) : 250=30.03%, 500=61.69%, 750=7.81%, 1000=0.21% 00:30:17.447 lat (msec) : 2=0.09%, 50=0.17% 00:30:17.447 cpu : usr=3.90%, sys=5.60%, ctx=2336, majf=0, minf=1 00:30:17.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:17.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.447 issued rwts: total=1024,1307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:17.447 job3: (groupid=0, jobs=1): err= 0: pid=1181015: Fri Nov 15 12:51:57 2024 00:30:17.447 read: IOPS=1007, BW=4031KiB/s (4128kB/s)(4132KiB/1025msec) 00:30:17.447 slat (nsec): min=7092, max=54540, avg=15348.85, stdev=6002.29 00:30:17.447 clat (usec): min=246, max=41380, avg=504.24, stdev=1795.08 00:30:17.447 lat (usec): min=255, max=41401, avg=519.59, stdev=1795.32 00:30:17.447 clat percentiles (usec): 00:30:17.447 | 1.00th=[ 265], 5.00th=[ 297], 10.00th=[ 314], 20.00th=[ 343], 00:30:17.447 | 30.00th=[ 367], 40.00th=[ 396], 50.00th=[ 416], 60.00th=[ 433], 00:30:17.447 | 70.00th=[ 453], 80.00th=[ 494], 90.00th=[ 553], 95.00th=[ 611], 00:30:17.447 | 99.00th=[ 725], 99.50th=[ 775], 99.90th=[41157], 99.95th=[41157], 00:30:17.447 | 99.99th=[41157] 00:30:17.447 write: IOPS=1498, BW=5994KiB/s (6138kB/s)(6144KiB/1025msec); 0 zone resets 00:30:17.447 slat (nsec): min=7072, max=67392, avg=22429.57, stdev=9368.51 00:30:17.447 clat (usec): min=157, max=507, avg=286.34, stdev=66.55 00:30:17.447 lat (usec): min=165, max=533, avg=308.77, stdev=68.26 00:30:17.447 clat percentiles (usec): 00:30:17.447 | 1.00th=[ 169], 5.00th=[ 206], 10.00th=[ 221], 20.00th=[ 233], 00:30:17.447 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 269], 60.00th=[ 293], 00:30:17.447 | 70.00th=[ 310], 80.00th=[ 347], 90.00th=[ 392], 95.00th=[ 416], 00:30:17.447 | 99.00th=[ 457], 99.50th=[ 465], 99.90th=[ 494], 99.95th=[ 506], 00:30:17.447 | 99.99th=[ 506] 00:30:17.447 bw ( KiB/s): min= 5680, max= 6608, per=36.02%, avg=6144.00, stdev=656.20, samples=2 00:30:17.447 iops : min= 1420, max= 1652, avg=1536.00, stdev=164.05, samples=2 00:30:17.447 lat (usec) : 250=23.55%, 500=68.74%, 750=7.43%, 1000=0.16% 00:30:17.447 lat (msec) : 2=0.04%, 50=0.08% 00:30:17.447 cpu : usr=4.98%, sys=5.08%, ctx=2570, majf=0, minf=1 00:30:17.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:30:17.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.447 issued rwts: total=1033,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:17.447 00:30:17.447 Run status group 0 (all jobs): 00:30:17.447 READ: bw=11.7MiB/s (12.3MB/s), 91.1KiB/s-4092KiB/s (93.3kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1027msec 00:30:17.447 WRITE: bw=16.7MiB/s (17.5MB/s), 2028KiB/s-5994KiB/s (2076kB/s-6138kB/s), io=17.1MiB (17.9MB), run=1001-1027msec 00:30:17.447 00:30:17.447 Disk stats (read/write): 00:30:17.447 nvme0n1: ios=768/1024, merge=0/0, ticks=527/267, in_queue=794, util=86.47% 00:30:17.447 nvme0n2: ios=18/512, merge=0/0, ticks=738/90, in_queue=828, util=86.57% 00:30:17.447 nvme0n3: ios=867/1024, merge=0/0, ticks=1489/265, in_queue=1754, util=97.91% 00:30:17.447 nvme0n4: ios=1081/1238, merge=0/0, ticks=1389/322, in_queue=1711, util=97.89% 00:30:17.447 12:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:17.447 [global] 00:30:17.447 thread=1 00:30:17.447 invalidate=1 00:30:17.447 rw=randwrite 00:30:17.447 time_based=1 00:30:17.447 runtime=1 00:30:17.448 ioengine=libaio 00:30:17.448 direct=1 00:30:17.448 bs=4096 00:30:17.448 iodepth=1 00:30:17.448 norandommap=0 00:30:17.448 numjobs=1 00:30:17.448 00:30:17.448 verify_dump=1 00:30:17.448 verify_backlog=512 00:30:17.448 verify_state_save=0 00:30:17.448 do_verify=1 00:30:17.448 verify=crc32c-intel 00:30:17.448 [job0] 00:30:17.448 filename=/dev/nvme0n1 00:30:17.448 [job1] 00:30:17.448 filename=/dev/nvme0n2 00:30:17.448 [job2] 00:30:17.448 filename=/dev/nvme0n3 00:30:17.448 [job3] 00:30:17.448 filename=/dev/nvme0n4 00:30:17.448 Could not set queue depth (nvme0n1) 00:30:17.448 Could not set queue depth (nvme0n2) 00:30:17.448 Could not set queue depth (nvme0n3) 00:30:17.448 Could not set queue depth (nvme0n4) 00:30:17.705 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:17.705 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:17.705 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:17.705 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:17.705 fio-3.35 00:30:17.705 Starting 4 threads 00:30:19.079 00:30:19.079 job0: (groupid=0, jobs=1): err= 0: pid=1181239: Fri Nov 15 12:51:59 2024 00:30:19.079 read: IOPS=1721, BW=6887KiB/s (7052kB/s)(7100KiB/1031msec) 00:30:19.079 slat (nsec): min=5318, max=65844, avg=10584.58, stdev=5140.91 00:30:19.079 clat (usec): min=212, max=41070, avg=297.49, stdev=968.98 00:30:19.079 lat (usec): min=220, max=41136, avg=308.07, stdev=970.21 00:30:19.079 clat percentiles (usec): 00:30:19.079 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 239], 00:30:19.079 | 30.00th=[ 247], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:30:19.079 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 314], 00:30:19.079 | 99.00th=[ 416], 99.50th=[ 445], 99.90th=[ 486], 99.95th=[41157], 00:30:19.079 | 99.99th=[41157] 00:30:19.079 write: IOPS=1986, BW=7946KiB/s (8136kB/s)(8192KiB/1031msec); 0 zone resets 00:30:19.079 slat (nsec): 
min=6478, max=66754, avg=12847.84, stdev=6611.11 00:30:19.079 clat (usec): min=140, max=2788, avg=216.18, stdev=70.16 00:30:19.079 lat (usec): min=147, max=2796, avg=229.03, stdev=69.79 00:30:19.079 clat percentiles (usec): 00:30:19.080 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 176], 00:30:19.080 | 30.00th=[ 184], 40.00th=[ 204], 50.00th=[ 215], 60.00th=[ 229], 00:30:19.080 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 258], 95.00th=[ 273], 00:30:19.080 | 99.00th=[ 338], 99.50th=[ 396], 99.90th=[ 586], 99.95th=[ 635], 00:30:19.080 | 99.99th=[ 2802] 00:30:19.080 bw ( KiB/s): min= 8175, max= 8192, per=45.77%, avg=8183.50, stdev=12.02, samples=2 00:30:19.080 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:30:19.080 lat (usec) : 250=61.13%, 500=38.77%, 750=0.05% 00:30:19.080 lat (msec) : 4=0.03%, 50=0.03% 00:30:19.080 cpu : usr=3.20%, sys=6.21%, ctx=3824, majf=0, minf=2 00:30:19.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:19.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.080 issued rwts: total=1775,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:19.080 job1: (groupid=0, jobs=1): err= 0: pid=1181244: Fri Nov 15 12:51:59 2024 00:30:19.080 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 00:30:19.080 slat (nsec): min=9311, max=35607, avg=20436.23, stdev=10134.36 00:30:19.080 clat (usec): min=40909, max=41026, avg=40972.88, stdev=34.25 00:30:19.080 lat (usec): min=40930, max=41045, avg=40993.32, stdev=29.02 00:30:19.080 clat percentiles (usec): 00:30:19.080 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:19.080 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:19.080 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:19.080 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:19.080 | 99.99th=[41157] 00:30:19.080 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:30:19.080 slat (nsec): min=7616, max=32710, avg=9564.60, stdev=3002.73 00:30:19.080 clat (usec): min=151, max=324, avg=226.44, stdev=29.88 00:30:19.080 lat (usec): min=160, max=332, avg=236.01, stdev=29.61 00:30:19.080 clat percentiles (usec): 00:30:19.080 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 198], 00:30:19.080 | 30.00th=[ 227], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 243], 00:30:19.080 | 70.00th=[ 245], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 253], 00:30:19.080 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 326], 99.95th=[ 326], 00:30:19.080 | 99.99th=[ 326] 00:30:19.080 bw ( KiB/s): min= 4087, max= 4087, per=22.86%, avg=4087.00, stdev= 0.00, samples=1 00:30:19.080 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:30:19.080 lat (usec) : 250=89.89%, 500=5.99% 00:30:19.080 lat (msec) : 50=4.12% 00:30:19.080 cpu : usr=0.29%, sys=0.68%, ctx=538, majf=0, minf=1 00:30:19.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:19.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.080 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:19.080 job2: (groupid=0, jobs=1): err= 0: 
pid=1181261: Fri Nov 15 12:51:59 2024 00:30:19.080 read: IOPS=20, BW=83.0KiB/s (85.0kB/s)(84.0KiB/1012msec) 00:30:19.080 slat (nsec): min=13301, max=35841, avg=22691.67, stdev=10090.30 00:30:19.080 clat (usec): min=40897, max=43981, avg=41119.25, stdev=657.94 00:30:19.080 lat (usec): min=40932, max=43999, avg=41141.94, stdev=656.59 00:30:19.080 clat percentiles (usec): 00:30:19.080 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:19.080 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:19.080 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:19.080 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:30:19.080 | 99.99th=[43779] 00:30:19.080 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:30:19.080 slat (nsec): min=7560, max=39745, avg=10538.15, stdev=3908.78 00:30:19.080 clat (usec): min=170, max=472, avg=273.73, stdev=59.60 00:30:19.080 lat (usec): min=180, max=481, avg=284.26, stdev=59.94 00:30:19.080 clat percentiles (usec): 00:30:19.080 | 1.00th=[ 180], 5.00th=[ 219], 10.00th=[ 231], 20.00th=[ 237], 00:30:19.080 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:30:19.080 | 70.00th=[ 269], 80.00th=[ 293], 90.00th=[ 388], 95.00th=[ 408], 00:30:19.080 | 99.00th=[ 453], 99.50th=[ 465], 99.90th=[ 474], 99.95th=[ 474], 00:30:19.080 | 99.99th=[ 474] 00:30:19.080 bw ( KiB/s): min= 4096, max= 4096, per=22.91%, avg=4096.00, stdev= 0.00, samples=1 00:30:19.080 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:19.080 lat (usec) : 250=40.34%, 500=55.72% 00:30:19.080 lat (msec) : 50=3.94% 00:30:19.080 cpu : usr=0.20%, sys=0.89%, ctx=533, majf=0, minf=2 00:30:19.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:19.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.080 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:19.080 job3: (groupid=0, jobs=1): err= 0: pid=1181272: Fri Nov 15 12:51:59 2024 00:30:19.080 read: IOPS=1260, BW=5043KiB/s (5164kB/s)(5048KiB/1001msec) 00:30:19.080 slat (nsec): min=4672, max=38181, avg=7057.79, stdev=4064.29 00:30:19.080 clat (usec): min=196, max=41255, avg=538.32, stdev=3380.96 00:30:19.080 lat (usec): min=201, max=41272, avg=545.38, stdev=3382.69 00:30:19.080 clat percentiles (usec): 00:30:19.080 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 229], 00:30:19.080 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 245], 00:30:19.080 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 285], 95.00th=[ 379], 00:30:19.080 | 99.00th=[ 537], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:30:19.080 | 99.99th=[41157] 00:30:19.080 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:30:19.080 slat (nsec): min=6413, max=29036, avg=7989.31, stdev=2600.38 00:30:19.080 clat (usec): min=157, max=1908, avg=191.07, stdev=49.76 00:30:19.080 lat (usec): min=164, max=1915, avg=199.06, stdev=49.92 00:30:19.080 clat percentiles (usec): 00:30:19.080 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:30:19.080 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 192], 00:30:19.080 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 233], 00:30:19.080 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 388], 99.95th=[ 1909], 
00:30:19.080 | 99.99th=[ 1909] 00:30:19.080 bw ( KiB/s): min= 9493, max= 9493, per=53.10%, avg=9493.00, stdev= 0.00, samples=1 00:30:19.080 iops : min= 2373, max= 2373, avg=2373.00, stdev= 0.00, samples=1 00:30:19.080 lat (usec) : 250=86.42%, 500=12.72%, 750=0.50% 00:30:19.080 lat (msec) : 2=0.04%, 50=0.32% 00:30:19.080 cpu : usr=1.00%, sys=2.20%, ctx=2799, majf=0, minf=1 00:30:19.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:19.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.080 issued rwts: total=1262,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:19.080 00:30:19.080 Run status group 0 (all jobs): 00:30:19.080 READ: bw=11.7MiB/s (12.2MB/s), 83.0KiB/s-6887KiB/s (85.0kB/s-7052kB/s), io=12.0MiB (12.6MB), run=1001-1031msec 00:30:19.080 WRITE: bw=17.5MiB/s (18.3MB/s), 1998KiB/s-7946KiB/s (2046kB/s-8136kB/s), io=18.0MiB (18.9MB), run=1001-1031msec 00:30:19.080 00:30:19.080 Disk stats (read/write): 00:30:19.080 nvme0n1: ios=1586/1760, merge=0/0, ticks=425/353, in_queue=778, util=86.67% 00:30:19.080 nvme0n2: ios=41/512, merge=0/0, ticks=1688/111, in_queue=1799, util=98.37% 00:30:19.080 nvme0n3: ios=38/512, merge=0/0, ticks=885/132, in_queue=1017, util=90.60% 00:30:19.080 nvme0n4: ios=1215/1536, merge=0/0, ticks=1453/279, in_queue=1732, util=99.47% 00:30:19.080 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:19.080 [global] 00:30:19.080 thread=1 00:30:19.080 invalidate=1 00:30:19.080 rw=write 00:30:19.080 time_based=1 00:30:19.080 runtime=1 00:30:19.080 ioengine=libaio 00:30:19.080 direct=1 00:30:19.080 bs=4096 00:30:19.080 iodepth=128 00:30:19.080 norandommap=0 00:30:19.080 numjobs=1 00:30:19.080 00:30:19.080 verify_dump=1 00:30:19.080 verify_backlog=512 00:30:19.080 verify_state_save=0 00:30:19.080 do_verify=1 00:30:19.080 verify=crc32c-intel 00:30:19.080 [job0] 00:30:19.080 filename=/dev/nvme0n1 00:30:19.080 [job1] 00:30:19.080 filename=/dev/nvme0n2 00:30:19.080 [job2] 00:30:19.080 filename=/dev/nvme0n3 00:30:19.080 [job3] 00:30:19.080 filename=/dev/nvme0n4 00:30:19.080 Could not set queue depth (nvme0n1) 00:30:19.080 Could not set queue depth (nvme0n2) 00:30:19.080 Could not set queue depth (nvme0n3) 00:30:19.080 Could not set queue depth (nvme0n4) 00:30:19.080 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:19.080 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:19.080 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:19.080 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:19.080 fio-3.35 00:30:19.080 Starting 4 threads 00:30:20.455 00:30:20.455 job0: (groupid=0, jobs=1): err= 0: pid=1181582: Fri Nov 15 12:52:00 2024 00:30:20.455 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:30:20.455 slat (usec): min=2, max=43413, avg=98.82, stdev=782.06 00:30:20.455 clat (usec): min=8490, max=52296, avg=12668.52, stdev=6830.78 00:30:20.455 lat (usec): min=8673, max=52300, avg=12767.34, stdev=6848.16 00:30:20.455 clat percentiles (usec): 00:30:20.455 | 1.00th=[ 
8979], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10814], 00:30:20.455 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:30:20.455 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12780], 95.00th=[14484], 00:30:20.455 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:30:20.455 | 99.99th=[52167] 00:30:20.455 write: IOPS=5312, BW=20.8MiB/s (21.8MB/s)(20.8MiB/1003msec); 0 zone resets 00:30:20.455 slat (usec): min=3, max=12766, avg=85.90, stdev=429.29 00:30:20.455 clat (usec): min=2191, max=36459, avg=11629.62, stdev=3950.93 00:30:20.455 lat (usec): min=2816, max=36465, avg=11715.52, stdev=3951.49 00:30:20.455 clat percentiles (usec): 00:30:20.455 | 1.00th=[ 7046], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10552], 00:30:20.455 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:30:20.455 | 70.00th=[11338], 80.00th=[11469], 90.00th=[12125], 95.00th=[13173], 00:30:20.455 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:30:20.455 | 99.99th=[36439] 00:30:20.455 bw ( KiB/s): min=17136, max=24513, per=32.19%, avg=20824.50, stdev=5216.33, samples=2 00:30:20.455 iops : min= 4284, max= 6128, avg=5206.00, stdev=1303.90, samples=2 00:30:20.455 lat (msec) : 4=0.16%, 10=10.71%, 20=85.48%, 50=2.46%, 100=1.19% 00:30:20.455 cpu : usr=5.49%, sys=6.49%, ctx=605, majf=0, minf=1 00:30:20.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:30:20.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:20.455 issued rwts: total=5120,5328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:20.455 job1: (groupid=0, jobs=1): err= 0: pid=1181588: Fri Nov 15 12:52:00 2024 00:30:20.455 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:30:20.455 slat (usec): min=2, max=25076, avg=180.85, stdev=1324.96 00:30:20.455 clat (usec): min=7800, max=75060, avg=20956.17, stdev=14382.75 00:30:20.455 lat (usec): min=7809, max=75064, avg=21137.02, stdev=14496.13 00:30:20.455 clat percentiles (usec): 00:30:20.455 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10683], 20.00th=[11076], 00:30:20.455 | 30.00th=[12518], 40.00th=[14353], 50.00th=[15664], 60.00th=[17171], 00:30:20.455 | 70.00th=[17957], 80.00th=[29754], 90.00th=[46400], 95.00th=[52691], 00:30:20.455 | 99.00th=[68682], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:30:20.455 | 99.99th=[74974] 00:30:20.455 write: IOPS=2957, BW=11.6MiB/s (12.1MB/s)(11.7MiB/1009msec); 0 zone resets 00:30:20.455 slat (usec): min=2, max=27292, avg=175.96, stdev=1022.91 00:30:20.455 clat (usec): min=3791, max=96579, avg=24317.61, stdev=16871.10 00:30:20.455 lat (usec): min=7465, max=96589, avg=24493.56, stdev=16934.64 00:30:20.455 clat percentiles (usec): 00:30:20.455 | 1.00th=[10159], 5.00th=[10421], 10.00th=[10683], 20.00th=[10945], 00:30:20.455 | 30.00th=[13304], 40.00th=[15401], 50.00th=[18482], 60.00th=[23987], 00:30:20.455 | 70.00th=[24773], 80.00th=[33817], 90.00th=[47973], 95.00th=[59507], 00:30:20.455 | 99.00th=[91751], 99.50th=[93848], 99.90th=[96994], 99.95th=[96994], 00:30:20.455 | 99.99th=[96994] 00:30:20.455 bw ( KiB/s): min= 7440, max=15408, per=17.66%, avg=11424.00, stdev=5634.23, samples=2 00:30:20.455 iops : min= 1860, max= 3852, avg=2856.00, stdev=1408.56, samples=2 00:30:20.455 lat (msec) : 4=0.02%, 10=2.25%, 20=60.98%, 50=28.39%, 100=8.35% 00:30:20.455 cpu : usr=1.59%, sys=3.47%, 
ctx=304, majf=0, minf=1 00:30:20.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:30:20.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:20.455 issued rwts: total=2560,2984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:20.455 job2: (groupid=0, jobs=1): err= 0: pid=1181590: Fri Nov 15 12:52:00 2024 00:30:20.455 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:30:20.455 slat (usec): min=3, max=24809, avg=136.55, stdev=1088.77 00:30:20.455 clat (usec): min=4573, max=56641, avg=16949.81, stdev=6492.49 00:30:20.455 lat (usec): min=4580, max=56647, avg=17086.36, stdev=6573.85 00:30:20.455 clat percentiles (usec): 00:30:20.455 | 1.00th=[ 7177], 5.00th=[10552], 10.00th=[10683], 20.00th=[11731], 00:30:20.456 | 30.00th=[12780], 40.00th=[13829], 50.00th=[14615], 60.00th=[17171], 00:30:20.456 | 70.00th=[18482], 80.00th=[21627], 90.00th=[26346], 95.00th=[31851], 00:30:20.456 | 99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[39060], 00:30:20.456 | 99.99th=[56886] 00:30:20.456 write: IOPS=3882, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1007msec); 0 zone resets 00:30:20.456 slat (usec): min=4, max=22067, avg=125.18, stdev=823.47 00:30:20.456 clat (usec): min=2937, max=38809, avg=17129.15, stdev=7153.38 00:30:20.456 lat (usec): min=2947, max=38816, avg=17254.33, stdev=7209.50 00:30:20.456 clat percentiles (usec): 00:30:20.456 | 1.00th=[ 5735], 5.00th=[ 7963], 10.00th=[10290], 20.00th=[11338], 00:30:20.456 | 30.00th=[12780], 40.00th=[13435], 50.00th=[13829], 60.00th=[15139], 00:30:20.456 | 70.00th=[21890], 80.00th=[24511], 90.00th=[25035], 95.00th=[31065], 00:30:20.456 | 99.00th=[35914], 99.50th=[36963], 99.90th=[39060], 99.95th=[39060], 00:30:20.456 | 99.99th=[39060] 00:30:20.456 bw ( KiB/s): min=13872, max=16384, per=23.39%, avg=15128.00, stdev=1776.25, samples=2 00:30:20.456 iops : min= 3468, max= 4096, avg=3782.00, stdev=444.06, samples=2 00:30:20.456 lat (msec) : 4=0.20%, 10=6.45%, 20=64.53%, 50=28.81%, 100=0.01% 00:30:20.456 cpu : usr=3.98%, sys=4.27%, ctx=361, majf=0, minf=1 00:30:20.456 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:20.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:20.456 issued rwts: total=3584,3910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.456 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:20.456 job3: (groupid=0, jobs=1): err= 0: pid=1181591: Fri Nov 15 12:52:00 2024 00:30:20.456 read: IOPS=4066, BW=15.9MiB/s (16.7MB/s)(15.9MiB/1003msec) 00:30:20.456 slat (usec): min=2, max=34157, avg=103.48, stdev=1056.40 00:30:20.456 clat (usec): min=893, max=58896, avg=16422.44, stdev=7919.56 00:30:20.456 lat (usec): min=3614, max=58901, avg=16525.92, stdev=7965.12 00:30:20.456 clat percentiles (usec): 00:30:20.456 | 1.00th=[ 5932], 5.00th=[ 6915], 10.00th=[ 9372], 20.00th=[11207], 00:30:20.456 | 30.00th=[11731], 40.00th=[12649], 50.00th=[13566], 60.00th=[14877], 00:30:20.456 | 70.00th=[16581], 80.00th=[23200], 90.00th=[28443], 95.00th=[30802], 00:30:20.456 | 99.00th=[40633], 99.50th=[43779], 99.90th=[44827], 99.95th=[47973], 00:30:20.456 | 99.99th=[58983] 00:30:20.456 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:30:20.456 slat (usec): min=3, max=13567, 
avg=103.10, stdev=790.70 00:30:20.456 clat (usec): min=975, max=41869, avg=14744.60, stdev=8080.97 00:30:20.456 lat (usec): min=1003, max=41876, avg=14847.70, stdev=8140.75 00:30:20.456 clat percentiles (usec): 00:30:20.456 | 1.00th=[ 3687], 5.00th=[ 7046], 10.00th=[ 8356], 20.00th=[10945], 00:30:20.456 | 30.00th=[11469], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:30:20.456 | 70.00th=[13566], 80.00th=[15664], 90.00th=[20579], 95.00th=[39584], 00:30:20.456 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:30:20.456 | 99.99th=[41681] 00:30:20.456 bw ( KiB/s): min=12288, max=20480, per=25.33%, avg=16384.00, stdev=5792.62, samples=2 00:30:20.456 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:30:20.456 lat (usec) : 1000=0.04% 00:30:20.456 lat (msec) : 4=0.65%, 10=13.86%, 20=67.93%, 50=17.50%, 100=0.02% 00:30:20.456 cpu : usr=2.40%, sys=4.09%, ctx=247, majf=0, minf=1 00:30:20.456 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:20.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:20.456 issued rwts: total=4079,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.456 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:20.456 00:30:20.456 Run status group 0 (all jobs): 00:30:20.456 READ: bw=59.4MiB/s (62.3MB/s), 9.91MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=59.9MiB (62.8MB), run=1003-1009msec 00:30:20.456 WRITE: bw=63.2MiB/s (66.2MB/s), 11.6MiB/s-20.8MiB/s (12.1MB/s-21.8MB/s), io=63.7MiB (66.8MB), run=1003-1009msec 00:30:20.456 00:30:20.456 Disk stats (read/write): 00:30:20.456 nvme0n1: ios=4146/4607, merge=0/0, ticks=13249/12540, in_queue=25789, util=86.57% 00:30:20.456 nvme0n2: ios=2476/2560, merge=0/0, ticks=19513/15707, in_queue=35220, util=86.48% 00:30:20.456 nvme0n3: ios=3111/3230, merge=0/0, ticks=51732/50326, in_queue=102058, util=97.70% 00:30:20.456 nvme0n4: ios=3072/3515, merge=0/0, ticks=48183/47329, in_queue=95512, util=89.66% 00:30:20.456 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:30:20.456 [global] 00:30:20.456 thread=1 00:30:20.456 invalidate=1 00:30:20.456 rw=randwrite 00:30:20.456 time_based=1 00:30:20.456 runtime=1 00:30:20.456 ioengine=libaio 00:30:20.456 direct=1 00:30:20.456 bs=4096 00:30:20.456 iodepth=128 00:30:20.456 norandommap=0 00:30:20.456 numjobs=1 00:30:20.456 00:30:20.456 verify_dump=1 00:30:20.456 verify_backlog=512 00:30:20.456 verify_state_save=0 00:30:20.456 do_verify=1 00:30:20.456 verify=crc32c-intel 00:30:20.456 [job0] 00:30:20.456 filename=/dev/nvme0n1 00:30:20.456 [job1] 00:30:20.456 filename=/dev/nvme0n2 00:30:20.456 [job2] 00:30:20.456 filename=/dev/nvme0n3 00:30:20.456 [job3] 00:30:20.456 filename=/dev/nvme0n4 00:30:20.456 Could not set queue depth (nvme0n1) 00:30:20.456 Could not set queue depth (nvme0n2) 00:30:20.456 Could not set queue depth (nvme0n3) 00:30:20.456 Could not set queue depth (nvme0n4) 00:30:20.456 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:20.456 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:20.456 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:20.456 job3: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:20.456 fio-3.35 00:30:20.456 Starting 4 threads 00:30:21.831 00:30:21.831 job0: (groupid=0, jobs=1): err= 0: pid=1181817: Fri Nov 15 12:52:01 2024 00:30:21.831 read: IOPS=5402, BW=21.1MiB/s (22.1MB/s)(22.2MiB/1051msec) 00:30:21.831 slat (usec): min=2, max=9773, avg=87.70, stdev=677.30 00:30:21.831 clat (usec): min=5024, max=54711, avg=11582.77, stdev=4541.20 00:30:21.831 lat (usec): min=5031, max=54717, avg=11670.47, stdev=4578.44 00:30:21.831 clat percentiles (usec): 00:30:21.831 | 1.00th=[ 7046], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9503], 00:30:21.831 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10814], 00:30:21.831 | 70.00th=[11207], 80.00th=[12649], 90.00th=[16188], 95.00th=[17433], 00:30:21.831 | 99.00th=[20841], 99.50th=[52167], 99.90th=[53740], 99.95th=[54789], 00:30:21.831 | 99.99th=[54789] 00:30:21.831 write: IOPS=5845, BW=22.8MiB/s (23.9MB/s)(24.0MiB/1051msec); 0 zone resets 00:30:21.831 slat (usec): min=3, max=9460, avg=72.53, stdev=516.33 00:30:21.831 clat (usec): min=3236, max=62223, avg=10997.07, stdev=6033.74 00:30:21.831 lat (usec): min=3243, max=62230, avg=11069.60, stdev=6044.35 00:30:21.831 clat percentiles (usec): 00:30:21.831 | 1.00th=[ 4621], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 8225], 00:30:21.831 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10683], 60.00th=[11076], 00:30:21.831 | 70.00th=[11338], 80.00th=[11731], 90.00th=[14353], 95.00th=[14746], 00:30:21.831 | 99.00th=[56886], 99.50th=[59507], 99.90th=[61604], 99.95th=[62129], 00:30:21.831 | 99.99th=[62129] 00:30:21.831 bw ( KiB/s): min=23928, max=24576, per=37.31%, avg=24252.00, stdev=458.21, samples=2 00:30:21.831 iops : min= 5982, max= 6144, avg=6063.00, stdev=114.55, samples=2 00:30:21.831 lat (msec) : 4=0.14%, 10=37.24%, 20=61.27%, 50=0.28%, 100=1.07% 00:30:21.831 cpu : usr=6.86%, sys=9.62%, ctx=408, majf=0, minf=1 00:30:21.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:30:21.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:21.831 issued rwts: total=5678,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.831 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:21.831 job1: (groupid=0, jobs=1): err= 0: pid=1181818: Fri Nov 15 12:52:01 2024 00:30:21.831 read: IOPS=5367, BW=21.0MiB/s (22.0MB/s)(22.0MiB/1049msec) 00:30:21.831 slat (usec): min=2, max=10569, avg=90.24, stdev=734.81 00:30:21.831 clat (usec): min=5419, max=62478, avg=12891.80, stdev=6644.41 00:30:21.831 lat (usec): min=5438, max=62483, avg=12982.05, stdev=6677.70 00:30:21.831 clat percentiles (usec): 00:30:21.831 | 1.00th=[ 7373], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10290], 00:30:21.831 | 30.00th=[10552], 40.00th=[10683], 50.00th=[11076], 60.00th=[11338], 00:30:21.831 | 70.00th=[11731], 80.00th=[14615], 90.00th=[17433], 95.00th=[19792], 00:30:21.831 | 99.00th=[52167], 99.50th=[52691], 99.90th=[52691], 99.95th=[62653], 00:30:21.831 | 99.99th=[62653] 00:30:21.831 write: IOPS=5368, BW=21.0MiB/s (22.0MB/s)(22.0MiB/1049msec); 0 zone resets 00:30:21.831 slat (usec): min=3, max=9513, avg=77.48, stdev=555.93 00:30:21.831 clat (usec): min=792, max=21885, avg=10757.77, stdev=2805.85 00:30:21.831 lat (usec): min=816, max=21892, avg=10835.25, stdev=2835.16 00:30:21.831 clat percentiles (usec): 00:30:21.831 | 1.00th=[ 2933], 5.00th=[ 6128], 10.00th=[ 6849], 
20.00th=[ 7504], 00:30:21.831 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11469], 60.00th=[11731], 00:30:21.831 | 70.00th=[11994], 80.00th=[12387], 90.00th=[14615], 95.00th=[15270], 00:30:21.831 | 99.00th=[16057], 99.50th=[16909], 99.90th=[21627], 99.95th=[21890], 00:30:21.831 | 99.99th=[21890] 00:30:21.831 bw ( KiB/s): min=20752, max=24304, per=34.66%, avg=22528.00, stdev=2511.64, samples=2 00:30:21.831 iops : min= 5188, max= 6076, avg=5632.00, stdev=627.91, samples=2 00:30:21.831 lat (usec) : 1000=0.02% 00:30:21.831 lat (msec) : 2=0.23%, 4=0.70%, 10=19.99%, 20=76.72%, 50=1.23% 00:30:21.831 lat (msec) : 100=1.12% 00:30:21.831 cpu : usr=6.01%, sys=10.21%, ctx=412, majf=0, minf=1 00:30:21.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:30:21.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:21.831 issued rwts: total=5630,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.831 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:21.831 job2: (groupid=0, jobs=1): err= 0: pid=1181819: Fri Nov 15 12:52:01 2024 00:30:21.831 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:30:21.831 slat (usec): min=3, max=25742, avg=249.43, stdev=1646.58 00:30:21.831 clat (usec): min=11807, max=83814, avg=31103.68, stdev=16478.56 00:30:21.831 lat (usec): min=11816, max=83819, avg=31353.12, stdev=16614.41 00:30:21.831 clat percentiles (usec): 00:30:21.831 | 1.00th=[13435], 5.00th=[14484], 10.00th=[14877], 20.00th=[15795], 00:30:21.831 | 30.00th=[16581], 40.00th=[17957], 50.00th=[25035], 60.00th=[33817], 00:30:21.831 | 70.00th=[43254], 80.00th=[45351], 90.00th=[54789], 95.00th=[61604], 00:30:21.831 | 99.00th=[69731], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022], 00:30:21.831 | 99.99th=[83362] 00:30:21.831 write: IOPS=2215, BW=8863KiB/s (9076kB/s)(8916KiB/1006msec); 0 zone resets 00:30:21.831 slat (usec): min=4, max=31496, avg=210.11, stdev=1620.79 00:30:21.831 clat (usec): min=3916, max=83023, avg=28050.18, stdev=16333.00 00:30:21.831 lat (usec): min=10034, max=83042, avg=28260.28, stdev=16471.84 00:30:21.831 clat percentiles (usec): 00:30:21.831 | 1.00th=[13829], 5.00th=[14353], 10.00th=[14746], 20.00th=[15139], 00:30:21.831 | 30.00th=[15270], 40.00th=[16188], 50.00th=[21890], 60.00th=[26346], 00:30:21.831 | 70.00th=[32637], 80.00th=[45876], 90.00th=[53740], 95.00th=[62129], 00:30:21.831 | 99.00th=[76022], 99.50th=[76022], 99.90th=[76022], 99.95th=[81265], 00:30:21.831 | 99.99th=[83362] 00:30:21.831 bw ( KiB/s): min= 5240, max=11576, per=12.94%, avg=8408.00, stdev=4480.23, samples=2 00:30:21.831 iops : min= 1310, max= 2894, avg=2102.00, stdev=1120.06, samples=2 00:30:21.831 lat (msec) : 4=0.02%, 20=46.06%, 50=38.67%, 100=15.24% 00:30:21.831 cpu : usr=2.59%, sys=4.78%, ctx=162, majf=0, minf=1 00:30:21.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:30:21.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:21.831 issued rwts: total=2048,2229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.831 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:21.831 job3: (groupid=0, jobs=1): err= 0: pid=1181820: Fri Nov 15 12:52:01 2024 00:30:21.831 read: IOPS=2688, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1006msec) 00:30:21.831 slat (usec): min=2, max=16248, avg=163.34, stdev=1143.90 00:30:21.831 clat 
(usec): min=2298, max=73150, avg=19992.36, stdev=7979.97 00:30:21.831 lat (usec): min=9562, max=73158, avg=20155.70, stdev=8071.73 00:30:21.831 clat percentiles (usec): 00:30:21.831 | 1.00th=[10683], 5.00th=[13566], 10.00th=[14353], 20.00th=[15139], 00:30:21.831 | 30.00th=[16057], 40.00th=[17171], 50.00th=[17957], 60.00th=[19268], 00:30:21.831 | 70.00th=[20317], 80.00th=[23200], 90.00th=[26084], 95.00th=[32637], 00:30:21.831 | 99.00th=[64750], 99.50th=[65799], 99.90th=[72877], 99.95th=[72877], 00:30:21.831 | 99.99th=[72877] 00:30:21.831 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:30:21.831 slat (usec): min=4, max=16298, avg=173.78, stdev=1059.67 00:30:21.831 clat (usec): min=1097, max=73159, avg=23946.95, stdev=14583.34 00:30:21.831 lat (usec): min=1127, max=73172, avg=24120.74, stdev=14690.32 00:30:21.831 clat percentiles (usec): 00:30:21.831 | 1.00th=[ 9372], 5.00th=[12387], 10.00th=[13173], 20.00th=[13829], 00:30:21.831 | 30.00th=[15533], 40.00th=[17695], 50.00th=[18482], 60.00th=[19006], 00:30:21.831 | 70.00th=[24511], 80.00th=[26346], 90.00th=[56886], 95.00th=[60031], 00:30:21.831 | 99.00th=[62129], 99.50th=[63701], 99.90th=[63701], 99.95th=[72877], 00:30:21.831 | 99.99th=[72877] 00:30:21.831 bw ( KiB/s): min=11880, max=12696, per=18.91%, avg=12288.00, stdev=577.00, samples=2 00:30:21.831 iops : min= 2970, max= 3174, avg=3072.00, stdev=144.25, samples=2 00:30:21.831 lat (msec) : 2=0.02%, 4=0.02%, 10=0.62%, 20=64.91%, 50=26.57% 00:30:21.831 lat (msec) : 100=7.86% 00:30:21.831 cpu : usr=3.68%, sys=5.17%, ctx=222, majf=0, minf=1 00:30:21.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:30:21.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:21.831 issued rwts: total=2705,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.831 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:21.831 00:30:21.831 Run status group 0 (all jobs): 00:30:21.831 READ: bw=59.7MiB/s (62.6MB/s), 8143KiB/s-21.1MiB/s (8339kB/s-22.1MB/s), io=62.7MiB (65.8MB), run=1006-1051msec 00:30:21.831 WRITE: bw=63.5MiB/s (66.6MB/s), 8863KiB/s-22.8MiB/s (9076kB/s-23.9MB/s), io=66.7MiB (69.9MB), run=1006-1051msec 00:30:21.831 00:30:21.831 Disk stats (read/write): 00:30:21.831 nvme0n1: ios=4904/5120, merge=0/0, ticks=51616/50389, in_queue=102005, util=87.17% 00:30:21.832 nvme0n2: ios=4631/4911, merge=0/0, ticks=53321/50240, in_queue=103561, util=89.95% 00:30:21.832 nvme0n3: ios=1802/2048, merge=0/0, ticks=27314/25178, in_queue=52492, util=93.55% 00:30:21.832 nvme0n4: ios=2105/2560, merge=0/0, ticks=40589/63631, in_queue=104220, util=95.39% 00:30:21.832 12:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:30:21.832 12:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1181960 00:30:21.832 12:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:30:21.832 12:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:30:21.832 [global] 00:30:21.832 thread=1 00:30:21.832 invalidate=1 00:30:21.832 rw=read 00:30:21.832 time_based=1 00:30:21.832 runtime=10 00:30:21.832 ioengine=libaio 00:30:21.832 direct=1 00:30:21.832 bs=4096 00:30:21.832 iodepth=1 00:30:21.832 norandommap=1 
00:30:21.832 numjobs=1 00:30:21.832 00:30:21.832 [job0] 00:30:21.832 filename=/dev/nvme0n1 00:30:21.832 [job1] 00:30:21.832 filename=/dev/nvme0n2 00:30:21.832 [job2] 00:30:21.832 filename=/dev/nvme0n3 00:30:21.832 [job3] 00:30:21.832 filename=/dev/nvme0n4 00:30:21.832 Could not set queue depth (nvme0n1) 00:30:21.832 Could not set queue depth (nvme0n2) 00:30:21.832 Could not set queue depth (nvme0n3) 00:30:21.832 Could not set queue depth (nvme0n4) 00:30:22.089 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:22.089 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:22.089 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:22.089 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:22.089 fio-3.35 00:30:22.089 Starting 4 threads 00:30:25.363 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:30:25.363 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:30:25.363 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=299008, buflen=4096 00:30:25.363 fio: pid=1182051, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:25.363 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:25.363 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:30:25.363 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46866432, buflen=4096 00:30:25.363 fio: pid=1182050, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:25.621 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:25.621 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:30:25.621 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=692224, buflen=4096 00:30:25.621 fio: pid=1182048, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:25.880 12:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:25.880 12:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:30:25.880 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=33964032, buflen=4096 00:30:25.880 fio: pid=1182049, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:25.880 00:30:25.880 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1182048: Fri Nov 15 12:52:06 2024 00:30:25.880 read: IOPS=47, BW=189KiB/s 
(194kB/s)(676KiB/3568msec) 00:30:25.880 slat (usec): min=5, max=13924, avg=143.66, stdev=1221.01 00:30:25.880 clat (usec): min=226, max=42018, avg=20822.75, stdev=20381.23 00:30:25.880 lat (usec): min=255, max=48999, avg=20967.15, stdev=20383.97 00:30:25.880 clat percentiles (usec): 00:30:25.880 | 1.00th=[ 243], 5.00th=[ 277], 10.00th=[ 363], 20.00th=[ 383], 00:30:25.880 | 30.00th=[ 404], 40.00th=[ 412], 50.00th=[40633], 60.00th=[41157], 00:30:25.880 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:25.880 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:25.880 | 99.99th=[42206] 00:30:25.880 bw ( KiB/s): min= 96, max= 432, per=0.85%, avg=177.33, stdev=134.00, samples=6 00:30:25.880 iops : min= 24, max= 108, avg=44.33, stdev=33.50, samples=6 00:30:25.880 lat (usec) : 250=2.94%, 500=45.88%, 750=0.59% 00:30:25.880 lat (msec) : 50=50.00% 00:30:25.880 cpu : usr=0.00%, sys=0.11%, ctx=172, majf=0, minf=2 00:30:25.880 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:25.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.880 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.880 issued rwts: total=170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.880 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:25.880 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1182049: Fri Nov 15 12:52:06 2024 00:30:25.880 read: IOPS=2151, BW=8606KiB/s (8813kB/s)(32.4MiB/3854msec) 00:30:25.880 slat (usec): min=5, max=31893, avg=15.48, stdev=372.18 00:30:25.880 clat (usec): min=196, max=42296, avg=443.73, stdev=2568.92 00:30:25.880 lat (usec): min=203, max=74015, avg=459.22, stdev=2686.80 00:30:25.880 clat percentiles (usec): 00:30:25.880 | 1.00th=[ 233], 5.00th=[ 251], 10.00th=[ 253], 20.00th=[ 258], 00:30:25.880 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:30:25.880 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 396], 00:30:25.880 | 99.00th=[ 457], 99.50th=[ 523], 99.90th=[42206], 99.95th=[42206], 00:30:25.880 | 99.99th=[42206] 00:30:25.880 bw ( KiB/s): min= 93, max=14856, per=45.66%, avg=9467.00, stdev=6435.36, samples=7 00:30:25.880 iops : min= 23, max= 3714, avg=2366.71, stdev=1608.90, samples=7 00:30:25.880 lat (usec) : 250=4.27%, 500=95.12%, 750=0.22% 00:30:25.880 lat (msec) : 50=0.39% 00:30:25.880 cpu : usr=1.38%, sys=3.40%, ctx=8296, majf=0, minf=2 00:30:25.880 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:25.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.880 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.880 issued rwts: total=8293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.880 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:25.880 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1182050: Fri Nov 15 12:52:06 2024 00:30:25.880 read: IOPS=3525, BW=13.8MiB/s (14.4MB/s)(44.7MiB/3246msec) 00:30:25.880 slat (nsec): min=5326, max=45328, avg=9531.15, stdev=4663.05 00:30:25.880 clat (usec): min=201, max=1630, avg=269.52, stdev=47.52 00:30:25.880 lat (usec): min=207, max=1635, avg=279.05, stdev=49.54 00:30:25.880 clat percentiles (usec): 00:30:25.880 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 227], 20.00th=[ 243], 00:30:25.880 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 
00:30:25.880 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 359], 00:30:25.880 | 99.00th=[ 429], 99.50th=[ 441], 99.90th=[ 537], 99.95th=[ 832], 00:30:25.880 | 99.99th=[ 1565] 00:30:25.880 bw ( KiB/s): min=12128, max=15904, per=67.49%, avg=13992.00, stdev=1429.26, samples=6 00:30:25.880 iops : min= 3032, max= 3976, avg=3498.00, stdev=357.31, samples=6 00:30:25.880 lat (usec) : 250=26.46%, 500=73.39%, 750=0.07%, 1000=0.04% 00:30:25.880 lat (msec) : 2=0.03% 00:30:25.880 cpu : usr=2.25%, sys=5.27%, ctx=11443, majf=0, minf=1 00:30:25.880 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:25.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.880 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.880 issued rwts: total=11443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.880 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:25.880 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1182051: Fri Nov 15 12:52:06 2024 00:30:25.881 read: IOPS=24, BW=98.2KiB/s (101kB/s)(292KiB/2975msec) 00:30:25.881 slat (nsec): min=9215, max=34618, avg=19853.11, stdev=8499.15 00:30:25.881 clat (usec): min=401, max=41065, avg=40413.41, stdev=4748.46 00:30:25.881 lat (usec): min=426, max=41078, avg=40433.31, stdev=4747.85 00:30:25.881 clat percentiles (usec): 00:30:25.881 | 1.00th=[ 400], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:25.881 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:25.881 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:25.881 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:25.881 | 99.99th=[41157] 00:30:25.881 bw ( KiB/s): min= 96, max= 104, per=0.48%, avg=99.20, stdev= 4.38, samples=5 00:30:25.881 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:30:25.881 lat (usec) : 500=1.35% 00:30:25.881 lat (msec) : 50=97.30% 00:30:25.881 cpu : usr=0.00%, sys=0.10%, ctx=74, majf=0, minf=1 00:30:25.881 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:25.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.881 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.881 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.881 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:25.881 00:30:25.881 Run status group 0 (all jobs): 00:30:25.881 READ: bw=20.2MiB/s (21.2MB/s), 98.2KiB/s-13.8MiB/s (101kB/s-14.4MB/s), io=78.0MiB (81.8MB), run=2975-3854msec 00:30:25.881 00:30:25.881 Disk stats (read/write): 00:30:25.881 nvme0n1: ios=164/0, merge=0/0, ticks=3316/0, in_queue=3316, util=95.42% 00:30:25.881 nvme0n2: ios=8286/0, merge=0/0, ticks=3347/0, in_queue=3347, util=95.39% 00:30:25.881 nvme0n3: ios=10944/0, merge=0/0, ticks=2962/0, in_queue=2962, util=98.44% 00:30:25.881 nvme0n4: ios=70/0, merge=0/0, ticks=2829/0, in_queue=2829, util=96.75% 00:30:26.139 12:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:26.139 12:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:30:26.397 12:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:26.397 12:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:30:26.655 12:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:26.655 12:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:30:27.221 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:27.221 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:30:27.221 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:30:27.221 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1181960 00:30:27.221 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:30:27.221 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:27.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:27.479 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:27.479 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:30:27.479 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:27.479 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:27.479 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:27.479 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:27.479 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:30:27.479 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:30:27.479 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:30:27.479 nvmf hotplug test: fio failed as expected 00:30:27.479 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:27.737 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:30:27.737 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:30:27.737 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:30:27.737 12:52:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:30:27.737 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:30:27.737 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:27.737 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:30:27.737 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:27.737 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:30:27.737 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:27.737 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:27.737 rmmod nvme_tcp 00:30:27.737 rmmod nvme_fabrics 00:30:27.737 rmmod nvme_keyring 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1180058 ']' 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1180058 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1180058 ']' 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1180058 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1180058 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1180058' 00:30:27.737 killing process with pid 1180058 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1180058 00:30:27.737 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1180058 00:30:27.996 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:27.996 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:27.996 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:27.996 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # 
iptr 00:30:27.996 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:30:27.996 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:30:27.996 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:27.996 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:27.996 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:27.996 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.996 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.996 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:30.531 00:30:30.531 real 0m23.821s 00:30:30.531 user 1m7.686s 00:30:30.531 sys 0m10.206s 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:30.531 ************************************ 00:30:30.531 END TEST nvmf_fio_target 00:30:30.531 ************************************ 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:30.531 ************************************ 00:30:30.531 START TEST nvmf_bdevio 00:30:30.531 ************************************ 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:30.531 * Looking for test storage... 
00:30:30.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:30.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.531 --rc genhtml_branch_coverage=1 00:30:30.531 --rc genhtml_function_coverage=1 00:30:30.531 --rc genhtml_legend=1 00:30:30.531 --rc geninfo_all_blocks=1 00:30:30.531 --rc geninfo_unexecuted_blocks=1 00:30:30.531 00:30:30.531 ' 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:30.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.531 --rc genhtml_branch_coverage=1 00:30:30.531 --rc genhtml_function_coverage=1 00:30:30.531 --rc genhtml_legend=1 00:30:30.531 --rc geninfo_all_blocks=1 00:30:30.531 --rc geninfo_unexecuted_blocks=1 00:30:30.531 00:30:30.531 ' 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:30.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.531 --rc genhtml_branch_coverage=1 00:30:30.531 --rc genhtml_function_coverage=1 00:30:30.531 --rc genhtml_legend=1 00:30:30.531 --rc geninfo_all_blocks=1 00:30:30.531 --rc geninfo_unexecuted_blocks=1 00:30:30.531 00:30:30.531 ' 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:30.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.531 --rc genhtml_branch_coverage=1 00:30:30.531 --rc genhtml_function_coverage=1 00:30:30.531 --rc genhtml_legend=1 00:30:30.531 --rc geninfo_all_blocks=1 00:30:30.531 --rc geninfo_unexecuted_blocks=1 00:30:30.531 00:30:30.531 ' 00:30:30.531 12:52:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.531 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.532 12:52:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:30:30.532 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:32.438 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:32.438 12:52:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.438 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:32.438 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:32.439 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:32.439 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:32.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:32.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:30:32.439 00:30:32.439 --- 10.0.0.2 ping statistics --- 00:30:32.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.439 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:32.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:32.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:30:32.439 00:30:32.439 --- 10.0.0.1 ping statistics --- 00:30:32.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.439 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:32.439 12:52:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1184782 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1184782 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1184782 ']' 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.439 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.698 [2024-11-15 12:52:12.803846] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:32.698 [2024-11-15 12:52:12.804906] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:30:32.698 [2024-11-15 12:52:12.804976] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.698 [2024-11-15 12:52:12.876527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:32.698 [2024-11-15 12:52:12.938006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.698 [2024-11-15 12:52:12.938064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.698 [2024-11-15 12:52:12.938077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.698 [2024-11-15 12:52:12.938088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.698 [2024-11-15 12:52:12.938098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.698 [2024-11-15 12:52:12.939699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:32.698 [2024-11-15 12:52:12.939760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:32.698 [2024-11-15 12:52:12.939827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:32.698 [2024-11-15 12:52:12.939830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:32.698 [2024-11-15 12:52:13.037528] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
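[editor's note] The trace above covers nvmftestinit and nvmfappstart: one port of the E810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace, both ends are addressed on 10.0.0.0/24, an iptables accept rule is added for port 4420, and nvmf_tgt is started inside the namespace in interrupt mode on core mask 0x78. A condensed sketch of those steps, reconstructed only from commands visible in this trace (the SPDK path, flags, and core mask are taken from the log; the wait loop is a rough stand-in for the harness's waitforlisten helper, not its actual implementation):

    # Sketch only - condensed from the nvmftestinit/nvmfappstart trace above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # path as seen in this log

    # Move the target-side port into its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let NVMe/TCP traffic reach port 4420 on the initiator-side interface (tagged so the
    # teardown can find it again, exactly as the ipts wrapper does above).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # connectivity check, as above
    modprobe nvme-tcp                                 # host-side driver, loaded by the harness above

    # Start the target inside the namespace in interrupt mode (flags as recorded above)
    # and wait for its RPC socket before configuring anything.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # rough stand-in for waitforlisten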
00:30:32.698 [2024-11-15 12:52:13.037610] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:32.698 [2024-11-15 12:52:13.037781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:32.698 [2024-11-15 12:52:13.038431] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:32.698 [2024-11-15 12:52:13.038636] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.957 [2024-11-15 12:52:13.092546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.957 Malloc0 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.957 12:52:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:32.957 [2024-11-15 12:52:13.168845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:32.957 { 00:30:32.957 "params": { 00:30:32.957 "name": "Nvme$subsystem", 00:30:32.957 "trtype": "$TEST_TRANSPORT", 00:30:32.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.957 "adrfam": "ipv4", 00:30:32.957 "trsvcid": "$NVMF_PORT", 00:30:32.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.957 "hdgst": ${hdgst:-false}, 00:30:32.957 "ddgst": ${ddgst:-false} 00:30:32.957 }, 00:30:32.957 "method": "bdev_nvme_attach_controller" 00:30:32.957 } 00:30:32.957 EOF 00:30:32.957 )") 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:30:32.957 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:32.957 "params": { 00:30:32.957 "name": "Nvme1", 00:30:32.957 "trtype": "tcp", 00:30:32.957 "traddr": "10.0.0.2", 00:30:32.957 "adrfam": "ipv4", 00:30:32.957 "trsvcid": "4420", 00:30:32.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:32.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:32.957 "hdgst": false, 00:30:32.957 "ddgst": false 00:30:32.957 }, 00:30:32.957 "method": "bdev_nvme_attach_controller" 00:30:32.957 }' 00:30:32.957 [2024-11-15 12:52:13.220362] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
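[editor's note] At this point bdevio.sh provisions the target over RPC and launches the bdevio tool against a generated JSON config. The rpc_cmd calls recorded above correspond, roughly, to direct scripts/rpc.py invocations against the target's /var/tmp/spdk.sock; the attach parameters below match those printed by gen_nvmf_target_json in the trace, while the "subsystems"/"bdev" wrapper is the usual shape of an SPDK --json config and is filled in here as an assumption:

    # Sketch: the RPC sequence recorded above, expressed as direct rpc.py calls.
    RPC="$SPDK/scripts/rpc.py"    # $SPDK as in the previous sketch

    $RPC nvmf_create_transport -t tcp -o -u 8192                  # flags exactly as in the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB backing bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # bdevio then attaches to that listener through a JSON config equivalent to the one
    # printed above (the trace feeds it via /dev/fd/62; process substitution works the same way).
    CONF='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ] }'
    "$SPDK/test/bdev/bdevio/bdevio" --json <(printf '%s\n' "$CONF")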
00:30:32.957 [2024-11-15 12:52:13.220442] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184818 ] 00:30:32.957 [2024-11-15 12:52:13.289572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:33.215 [2024-11-15 12:52:13.351798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.216 [2024-11-15 12:52:13.351848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:33.216 [2024-11-15 12:52:13.351851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.474 I/O targets: 00:30:33.474 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:30:33.474 00:30:33.474 00:30:33.474 CUnit - A unit testing framework for C - Version 2.1-3 00:30:33.474 http://cunit.sourceforge.net/ 00:30:33.474 00:30:33.474 00:30:33.474 Suite: bdevio tests on: Nvme1n1 00:30:33.474 Test: blockdev write read block ...passed 00:30:33.474 Test: blockdev write zeroes read block ...passed 00:30:33.474 Test: blockdev write zeroes read no split ...passed 00:30:33.474 Test: blockdev write zeroes read split ...passed 00:30:33.474 Test: blockdev write zeroes read split partial ...passed 00:30:33.474 Test: blockdev reset ...[2024-11-15 12:52:13.717602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:33.474 [2024-11-15 12:52:13.717711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xada640 (9): Bad file descriptor 00:30:33.474 [2024-11-15 12:52:13.761946] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:30:33.474 passed 00:30:33.474 Test: blockdev write read 8 blocks ...passed 00:30:33.474 Test: blockdev write read size > 128k ...passed 00:30:33.474 Test: blockdev write read invalid size ...passed 00:30:33.732 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:33.732 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:33.732 Test: blockdev write read max offset ...passed 00:30:33.732 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:33.732 Test: blockdev writev readv 8 blocks ...passed 00:30:33.732 Test: blockdev writev readv 30 x 1block ...passed 00:30:33.732 Test: blockdev writev readv block ...passed 00:30:33.732 Test: blockdev writev readv size > 128k ...passed 00:30:33.732 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:33.732 Test: blockdev comparev and writev ...[2024-11-15 12:52:13.975510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.732 [2024-11-15 12:52:13.975547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.732 [2024-11-15 12:52:13.975572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.732 [2024-11-15 12:52:13.975590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:33.732 [2024-11-15 12:52:13.976006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.732 [2024-11-15 12:52:13.976033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:33.732 [2024-11-15 12:52:13.976056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.732 [2024-11-15 12:52:13.976073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:33.732 [2024-11-15 12:52:13.976473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.732 [2024-11-15 12:52:13.976497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:33.732 [2024-11-15 12:52:13.976519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.732 [2024-11-15 12:52:13.976536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:33.732 [2024-11-15 12:52:13.976929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.732 [2024-11-15 12:52:13.976961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:33.732 [2024-11-15 12:52:13.976984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:33.732 [2024-11-15 12:52:13.977000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:33.732 passed 00:30:33.732 Test: blockdev nvme passthru rw ...passed 00:30:33.732 Test: blockdev nvme passthru vendor specific ...[2024-11-15 12:52:14.059001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:33.732 [2024-11-15 12:52:14.059028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:33.732 [2024-11-15 12:52:14.059180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:33.732 [2024-11-15 12:52:14.059205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:33.732 [2024-11-15 12:52:14.059352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:33.732 [2024-11-15 12:52:14.059376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:33.732 [2024-11-15 12:52:14.059524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:33.732 [2024-11-15 12:52:14.059548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:33.732 passed 00:30:33.991 Test: blockdev nvme admin passthru ...passed 00:30:33.991 Test: blockdev copy ...passed 00:30:33.991 00:30:33.991 Run Summary: Type Total Ran Passed Failed Inactive 00:30:33.991 suites 1 1 n/a 0 0 00:30:33.991 tests 23 23 23 0 0 00:30:33.991 asserts 152 152 152 0 n/a 00:30:33.991 00:30:33.991 Elapsed time = 1.099 seconds 00:30:33.991 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:33.991 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.991 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:33.991 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.991 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:30:33.991 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:30:33.991 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:33.991 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:30:33.991 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.991 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:30:33.991 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.991 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.991 rmmod nvme_tcp 00:30:33.991 rmmod nvme_fabrics 00:30:34.249 rmmod nvme_keyring 00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
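[editor's note] With all 23 bdevio tests passing, the script tears the target back down: the subsystem is deleted over RPC, nvmftestfini unloads the host-side NVMe modules (the rmmod lines above), the target process is killed, and the iptables rule and namespace added earlier are removed. A condensed sketch of that path, reusing the hypothetical $RPC, $SPDK and $nvmfpid names from the previous sketches (the ip netns delete line is an assumption about what _remove_spdk_ns amounts to; everything else mirrors commands visible in the trace):

    # Sketch of the teardown recorded above (order follows the trace).
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    modprobe -v -r nvme-tcp          # yields the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics

    kill "$nvmfpid" && wait "$nvmfpid"               # nvmfpid was 1184782 in this run

    # Drop the SPDK_NVMF-tagged accept rule and undo the namespace setup.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk                  # assumption: the effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1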
00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1184782 ']' 00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1184782 00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1184782 ']' 00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1184782 00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1184782 00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1184782' 00:30:34.249 killing process with pid 1184782 00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1184782 00:30:34.249 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1184782 00:30:34.508 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:34.508 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:34.508 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:34.508 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:30:34.508 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:30:34.508 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:34.508 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:30:34.508 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:34.508 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:34.508 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.508 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.508 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.412 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:36.412 00:30:36.412 real 0m6.305s 00:30:36.412 user 
0m8.002s 00:30:36.412 sys 0m2.585s 00:30:36.412 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:36.412 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:36.412 ************************************ 00:30:36.412 END TEST nvmf_bdevio 00:30:36.412 ************************************ 00:30:36.412 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:36.412 00:30:36.412 real 3m55.289s 00:30:36.412 user 8m53.925s 00:30:36.412 sys 1m23.800s 00:30:36.412 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:36.412 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:36.412 ************************************ 00:30:36.412 END TEST nvmf_target_core_interrupt_mode 00:30:36.412 ************************************ 00:30:36.412 12:52:16 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:36.412 12:52:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:36.412 12:52:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:36.412 12:52:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:36.412 ************************************ 00:30:36.412 START TEST nvmf_interrupt 00:30:36.412 ************************************ 00:30:36.412 12:52:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:36.671 * Looking for test storage... 
00:30:36.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.671 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:36.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.672 --rc genhtml_branch_coverage=1 00:30:36.672 --rc genhtml_function_coverage=1 00:30:36.672 --rc genhtml_legend=1 00:30:36.672 --rc geninfo_all_blocks=1 00:30:36.672 --rc geninfo_unexecuted_blocks=1 00:30:36.672 00:30:36.672 ' 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:36.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.672 --rc genhtml_branch_coverage=1 00:30:36.672 --rc genhtml_function_coverage=1 00:30:36.672 --rc genhtml_legend=1 00:30:36.672 --rc geninfo_all_blocks=1 00:30:36.672 --rc geninfo_unexecuted_blocks=1 00:30:36.672 00:30:36.672 ' 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:36.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.672 --rc genhtml_branch_coverage=1 00:30:36.672 --rc genhtml_function_coverage=1 00:30:36.672 --rc genhtml_legend=1 00:30:36.672 --rc geninfo_all_blocks=1 00:30:36.672 --rc geninfo_unexecuted_blocks=1 00:30:36.672 00:30:36.672 ' 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:36.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.672 --rc genhtml_branch_coverage=1 00:30:36.672 --rc genhtml_function_coverage=1 00:30:36.672 --rc genhtml_legend=1 00:30:36.672 --rc geninfo_all_blocks=1 00:30:36.672 --rc geninfo_unexecuted_blocks=1 00:30:36.672 00:30:36.672 ' 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.672 12:52:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:39.205 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.205 12:52:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:39.205 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:39.205 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:39.205 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:39.205 12:52:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:39.205 12:52:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:39.205 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:39.205 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:39.205 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:39.205 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:39.205 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:39.205 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:39.205 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:39.205 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:39.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:39.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:30:39.205 00:30:39.205 --- 10.0.0.2 ping statistics --- 00:30:39.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.205 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:30:39.205 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:39.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:39.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:30:39.205 00:30:39.205 --- 10.0.0.1 ping statistics --- 00:30:39.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.205 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:30:39.205 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1186908 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1186908 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1186908 ']' 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 [2024-11-15 12:52:19.153689] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:39.206 [2024-11-15 12:52:19.154889] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:30:39.206 [2024-11-15 12:52:19.154958] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.206 [2024-11-15 12:52:19.229369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:39.206 [2024-11-15 12:52:19.287830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
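The nvmf_tcp_init trace above boils down to a small two-port, network-namespace topology: the first E810 port (cvl_0_0) is moved into a fresh namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; an iptables rule opens TCP port 4420 and one ping in each direction confirms reachability. A minimal sketch of the same steps, using only the interface names and addresses shown in this log (the real helper in nvmf/common.sh also flushes stale addresses and tags the iptables rule with a comment):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP reach the listener
  ping -c 1 10.0.0.2                                               # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target namespace -> initiator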
00:30:39.206 [2024-11-15 12:52:19.287891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.206 [2024-11-15 12:52:19.287905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.206 [2024-11-15 12:52:19.287916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.206 [2024-11-15 12:52:19.287925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.206 [2024-11-15 12:52:19.289308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.206 [2024-11-15 12:52:19.289317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.206 [2024-11-15 12:52:19.387420] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:39.206 [2024-11-15 12:52:19.387452] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:39.206 [2024-11-15 12:52:19.387674] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:30:39.206 5000+0 records in 00:30:39.206 5000+0 records out 00:30:39.206 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0151183 s, 677 MB/s 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 AIO0 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 [2024-11-15 12:52:19.489326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.206 12:52:19 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:39.206 [2024-11-15 12:52:19.517595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1186908 0 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1186908 0 idle 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1186908 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1186908 -w 256 00:30:39.206 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1186908 root 20 0 128.2g 47616 34944 S 6.2 0.1 0:00.28 reactor_0' 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1186908 root 20 0 128.2g 47616 34944 S 6.2 0.1 0:00.28 reactor_0 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1186908 1 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1186908 1 idle 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1186908 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1186908 -w 256 00:30:39.466 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1186913 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1186913 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1187069 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
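With the target running in interrupt mode on cores 0 and 1 (-m 0x3, --interrupt-mode, inside the cvl_0_0_ns_spdk namespace), target/interrupt.sh provisions it over the RPC socket and then drives load from cores 2 and 3 (-c 0xC) so the reactors being measured are not also running the load generator. Condensed from the dd, rpc_cmd and perf invocations above; rpc_cmd is the in-tree wrapper around scripts/rpc.py, so a roughly equivalent by-hand sequence would be the following hedged summary, not the script verbatim ($SPDK here stands in for the workspace checkout path shown in the log):

  dd if=/dev/zero of=$SPDK/test/nvmf/target/aiofile bs=2048 count=5000
  rpc.py bdev_aio_create $SPDK/test/nvmf/target/aiofile AIO0 2048
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The backgrounded perf process is the pid 1187069 that the later "wait 1187069" refers to.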
00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1186908 0 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1186908 0 busy 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1186908 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1186908 -w 256 00:30:39.725 12:52:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:39.725 12:52:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1186908 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.28 reactor_0' 00:30:39.725 12:52:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1186908 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.28 reactor_0 00:30:39.725 12:52:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:39.725 12:52:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:39.725 12:52:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:39.725 12:52:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:39.725 12:52:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:39.725 12:52:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:39.725 12:52:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:30:41.098 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:30:41.098 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:41.098 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1186908 -w 256 00:30:41.098 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:41.098 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1186908 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.56 reactor_0' 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1186908 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.56 reactor_0 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1186908 1 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1186908 1 busy 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1186908 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1186908 -w 256 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1186913 root 20 0 128.2g 48384 34944 R 93.3 0.1 0:01.30 reactor_1' 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1186913 root 20 0 128.2g 48384 34944 R 93.3 0.1 0:01.30 reactor_1 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:41.099 12:52:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1187069 00:30:51.071 Initializing NVMe Controllers 00:30:51.071 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:51.071 Controller IO queue size 256, less than required. 00:30:51.071 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:51.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:51.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:51.071 Initialization complete. Launching workers. 
00:30:51.071 ======================================================== 00:30:51.071 Latency(us) 00:30:51.071 Device Information : IOPS MiB/s Average min max 00:30:51.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13732.00 53.64 18655.38 4423.11 22900.25 00:30:51.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13610.30 53.17 18823.02 4548.99 59198.13 00:30:51.071 ======================================================== 00:30:51.071 Total : 27342.30 106.81 18738.82 4423.11 59198.13 00:30:51.071 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1186908 0 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1186908 0 idle 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1186908 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1186908 -w 256 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1186908 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.23 reactor_0' 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1186908 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.23 reactor_0 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:51.071 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1186908 1 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1186908 1 idle 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1186908 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1186908 -w 256 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1186913 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1' 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1186913 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:51.072 12:52:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:30:52.448 12:52:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:52.448 12:52:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:52.448 12:52:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:52.448 12:52:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:52.448 12:52:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:52.448 12:52:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:30:52.448 12:52:32 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:30:52.448 12:52:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1186908 0 00:30:52.448 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1186908 0 idle 00:30:52.449 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1186908 00:30:52.449 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:52.449 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:52.449 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:52.449 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:52.449 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:52.449 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:52.449 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:52.449 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:52.449 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:52.449 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1186908 -w 256 00:30:52.449 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1186908 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.31 reactor_0' 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1186908 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.31 reactor_0 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1186908 1 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1186908 1 idle 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1186908 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
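Both the busy check during the perf run and the idle checks before and after it come down to one probe in interrupt/common.sh: take a single batch sample of the target's threads with top, pick out the reactor_<idx> line, and read its %CPU field, retrying up to ten times with a one second sleep. For this test the busy threshold is lowered to 30 (BUSY_THRESHOLD=30) and the idle threshold is also 30, so a reactor counts as busy above 30% CPU and idle at or below it. A rough standalone version of the probe, with the pid and reactor index from this run (the %CPU value lands in column 9 of top's batch output on this machine; field positions can vary with other top configurations):

  pid=1186908 idx=1 idle_threshold=30
  line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
  cpu_rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
  cpu_rate=${cpu_rate%.*}              # truncate the fraction, e.g. 6.2 -> 6, as the trace shows
  if (( cpu_rate > idle_threshold )); then echo "reactor_${idx} busy (${cpu_rate}%)"; else echo "reactor_${idx} idle"; fi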
00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1186908 -w 256 00:30:52.707 12:52:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1186913 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1' 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1186913 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:52.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:52.966 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:53.225 rmmod nvme_tcp 00:30:53.225 rmmod nvme_fabrics 00:30:53.225 rmmod nvme_keyring 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1186908 ']' 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1186908 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1186908 ']' 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1186908 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1186908 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1186908' 00:30:53.225 killing process with pid 1186908 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1186908 00:30:53.225 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1186908 00:30:53.484 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:53.484 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:53.484 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:53.484 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:30:53.484 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:30:53.484 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:53.484 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:30:53.484 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:53.484 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:53.484 12:52:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.484 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:53.484 12:52:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.389 12:52:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:55.389 00:30:55.389 real 0m18.958s 00:30:55.389 user 0m37.741s 00:30:55.389 sys 0m6.237s 00:30:55.389 12:52:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:55.389 12:52:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:55.389 ************************************ 00:30:55.389 END TEST nvmf_interrupt 00:30:55.389 ************************************ 00:30:55.389 00:30:55.389 real 25m2.270s 00:30:55.389 user 58m23.043s 00:30:55.389 sys 6m39.865s 00:30:55.389 12:52:35 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:55.389 12:52:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:55.389 ************************************ 00:30:55.389 END TEST nvmf_tcp 00:30:55.389 ************************************ 00:30:55.648 12:52:35 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:30:55.648 12:52:35 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:55.648 12:52:35 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:55.648 12:52:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:55.648 12:52:35 -- common/autotest_common.sh@10 -- # set +x 00:30:55.648 ************************************ 00:30:55.648 START TEST spdkcli_nvmf_tcp 00:30:55.648 ************************************ 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:55.648 * Looking for test storage... 00:30:55.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:55.648 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:55.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.649 --rc genhtml_branch_coverage=1 00:30:55.649 --rc genhtml_function_coverage=1 00:30:55.649 --rc genhtml_legend=1 00:30:55.649 --rc geninfo_all_blocks=1 00:30:55.649 --rc geninfo_unexecuted_blocks=1 00:30:55.649 00:30:55.649 ' 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:55.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.649 --rc genhtml_branch_coverage=1 00:30:55.649 --rc genhtml_function_coverage=1 00:30:55.649 --rc genhtml_legend=1 00:30:55.649 --rc geninfo_all_blocks=1 00:30:55.649 --rc geninfo_unexecuted_blocks=1 00:30:55.649 00:30:55.649 ' 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:55.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.649 --rc genhtml_branch_coverage=1 00:30:55.649 --rc genhtml_function_coverage=1 00:30:55.649 --rc genhtml_legend=1 00:30:55.649 --rc geninfo_all_blocks=1 00:30:55.649 --rc geninfo_unexecuted_blocks=1 00:30:55.649 00:30:55.649 ' 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:55.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.649 --rc genhtml_branch_coverage=1 00:30:55.649 --rc genhtml_function_coverage=1 00:30:55.649 --rc genhtml_legend=1 00:30:55.649 --rc geninfo_all_blocks=1 00:30:55.649 --rc geninfo_unexecuted_blocks=1 00:30:55.649 00:30:55.649 ' 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:55.649 
12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:55.649 12:52:35 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:55.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1189082 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1189082 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1189082 ']' 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.649 12:52:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:55.908 [2024-11-15 12:52:35.997194] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:30:55.908 [2024-11-15 12:52:35.997267] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189082 ] 00:30:55.908 [2024-11-15 12:52:36.062077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:55.908 [2024-11-15 12:52:36.121328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.908 [2024-11-15 12:52:36.121330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.908 12:52:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.908 12:52:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:30:55.908 12:52:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:55.908 12:52:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:55.908 12:52:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:56.166 12:52:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:56.166 12:52:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:56.166 12:52:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:56.166 12:52:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:56.166 12:52:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:56.166 12:52:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:56.166 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:56.166 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:56.166 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:56.166 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:56.166 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:56.166 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:56.166 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:56.166 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:56.166 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:56.166 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:56.166 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:56.166 ' 00:30:58.693 [2024-11-15 12:52:38.916199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.101 [2024-11-15 12:52:40.188599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:02.718 [2024-11-15 12:52:42.535867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:04.618 [2024-11-15 12:52:44.562123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:05.992 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:05.992 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:05.992 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:05.992 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:05.992 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:05.992 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:05.992 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:05.992 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:05.992 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:05.992 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:05.992 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:05.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:05.992 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:05.992 12:52:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:05.992 12:52:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:05.992 12:52:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.992 12:52:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:05.992 12:52:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:05.992 12:52:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.992 12:52:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:05.992 12:52:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:06.558 12:52:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:06.558 12:52:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:06.558 12:52:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:06.558 12:52:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:06.558 12:52:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:06.558 
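The check_match step traced above dumps the live spdkcli tree and compares it with a golden pattern file using the match utility; only the xtrace of the commands is visible, so the redirect into the .test file is an assumption implied by the later rm -f. A sketch of the step under the same checkout layout:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    match_dir=$rootdir/test/spdkcli/match_files

    # Capture the current /nvmf configuration tree as seen by spdkcli.
    "$rootdir/scripts/spdkcli.py" ll /nvmf > "$match_dir/spdkcli_nvmf.test"

    # Compare it against the golden pattern file; a mismatch fails the test.
    "$rootdir/test/app/match/match" "$match_dir/spdkcli_nvmf.test.match"

    # Remove the captured tree again, as common.sh@46 does in the trace.
    rm -f "$match_dir/spdkcli_nvmf.test"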
12:52:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:06.558 12:52:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:06.558 12:52:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:06.558 12:52:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:06.558 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:06.558 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:06.558 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:06.558 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:06.558 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:06.558 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:06.558 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:06.558 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:06.558 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:06.558 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:06.558 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:06.558 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:06.558 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:06.558 ' 00:31:11.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:11.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:11.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:11.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:11.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:11.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:11.820 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:11.820 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:11.820 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:11.820 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:11.820 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:11.820 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:11.820 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:11.820 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:12.078 12:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:12.078 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:12.078 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:12.078 
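The clear-config job above tears the configuration down in reverse order: namespaces and hosts first, then listeners, then the subsystems, and finally the backing malloc bdevs. The same commands could be issued one at a time with spdkcli.py instead of the batched spdkcli_job.py helper; a sketch with the paths and object names copied from the trace (the one-command-per-invocation style is an assumption):

    spdkcli=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py

    # Detach namespaces, hosts and listeners from the subsystems before deleting them.
    $spdkcli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1
    $spdkcli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all
    $spdkcli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2
    $spdkcli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all
    $spdkcli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262
    $spdkcli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all

    # Remove the subsystems, then the malloc bdevs (Malloc6 down through Malloc1).
    $spdkcli /nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3
    $spdkcli /nvmf/subsystem delete_all
    $spdkcli /bdevs/malloc delete Malloc6   # ...and likewise for Malloc5 through Malloc1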
12:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1189082 00:31:12.078 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1189082 ']' 00:31:12.078 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1189082 00:31:12.078 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:31:12.079 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:12.079 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1189082 00:31:12.079 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:12.079 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:12.079 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1189082' 00:31:12.079 killing process with pid 1189082 00:31:12.079 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1189082 00:31:12.079 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1189082 00:31:12.337 12:52:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:12.337 12:52:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:12.337 12:52:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1189082 ']' 00:31:12.337 12:52:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1189082 00:31:12.337 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1189082 ']' 00:31:12.337 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1189082 00:31:12.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1189082) - No such process 00:31:12.337 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1189082 is not found' 00:31:12.337 Process with pid 1189082 is not found 00:31:12.337 12:52:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:12.337 12:52:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:12.337 12:52:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:12.337 00:31:12.337 real 0m16.727s 00:31:12.337 user 0m35.774s 00:31:12.337 sys 0m0.750s 00:31:12.337 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:12.337 12:52:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:12.337 ************************************ 00:31:12.337 END TEST spdkcli_nvmf_tcp 00:31:12.337 ************************************ 00:31:12.337 12:52:52 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:12.337 12:52:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:12.337 12:52:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:12.337 12:52:52 -- common/autotest_common.sh@10 -- # set +x 00:31:12.337 ************************************ 00:31:12.337 START TEST nvmf_identify_passthru 00:31:12.337 ************************************ 00:31:12.337 12:52:52 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:12.337 * Looking for test 
storage... 00:31:12.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:12.337 12:52:52 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:12.337 12:52:52 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:31:12.337 12:52:52 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:12.596 12:52:52 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:12.596 12:52:52 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:31:12.596 12:52:52 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:12.596 12:52:52 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:12.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.596 --rc genhtml_branch_coverage=1 00:31:12.596 --rc genhtml_function_coverage=1 00:31:12.596 --rc genhtml_legend=1 00:31:12.596 --rc geninfo_all_blocks=1 00:31:12.596 --rc geninfo_unexecuted_blocks=1 00:31:12.596 00:31:12.596 ' 00:31:12.596 12:52:52 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:12.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.596 --rc genhtml_branch_coverage=1 00:31:12.596 --rc genhtml_function_coverage=1 00:31:12.596 --rc genhtml_legend=1 00:31:12.596 --rc geninfo_all_blocks=1 00:31:12.596 --rc geninfo_unexecuted_blocks=1 00:31:12.596 00:31:12.596 ' 00:31:12.596 12:52:52 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:12.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.596 --rc genhtml_branch_coverage=1 00:31:12.596 --rc genhtml_function_coverage=1 00:31:12.596 --rc genhtml_legend=1 00:31:12.596 --rc geninfo_all_blocks=1 00:31:12.596 --rc geninfo_unexecuted_blocks=1 00:31:12.596 00:31:12.596 ' 00:31:12.596 12:52:52 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:12.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.596 --rc genhtml_branch_coverage=1 00:31:12.596 --rc genhtml_function_coverage=1 00:31:12.596 --rc genhtml_legend=1 00:31:12.596 --rc geninfo_all_blocks=1 00:31:12.596 --rc geninfo_unexecuted_blocks=1 00:31:12.596 00:31:12.596 ' 00:31:12.596 12:52:52 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:12.596 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:12.597 12:52:52 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:12.597 12:52:52 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:12.597 12:52:52 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:12.597 12:52:52 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:12.597 12:52:52 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.597 12:52:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.597 12:52:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.597 12:52:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:12.597 12:52:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:12.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:12.597 12:52:52 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:12.597 12:52:52 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:12.597 12:52:52 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:12.597 12:52:52 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:12.597 12:52:52 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:12.597 12:52:52 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.597 12:52:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.597 12:52:52 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.597 12:52:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:12.597 12:52:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.597 12:52:52 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.597 12:52:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:12.597 12:52:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:12.597 12:52:52 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:31:12.597 12:52:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:31:15.126 12:52:54 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.126 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:15.127 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:15.127 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:15.127 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:15.127 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:15.127 12:52:54 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:15.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:31:15.127 00:31:15.127 --- 10.0.0.2 ping statistics --- 00:31:15.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.127 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:15.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:31:15.127 00:31:15.127 --- 10.0.0.1 ping statistics --- 00:31:15.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.127 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:15.127 12:52:54 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:15.127 12:52:55 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:15.127 12:52:55 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:15.127 12:52:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:15.127 12:52:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:15.127 12:52:55 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:15.127 12:52:55 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:31:15.127 12:52:55 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:15.127 12:52:55 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:15.127 12:52:55 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:15.127 12:52:55 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:31:15.127 12:52:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:15.127 12:52:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:15.127 12:52:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:15.127 12:52:55 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:15.127 12:52:55 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:15.127 12:52:55 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:31:15.127 12:52:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:31:15.127 12:52:55 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:31:15.127 12:52:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:31:15.127 12:52:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:15.127 12:52:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:19.307 12:52:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:31:19.307 12:52:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:31:19.307 12:52:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:19.307 12:52:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:23.489 12:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:23.489 12:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:23.489 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:23.489 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:23.489 12:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:23.489 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:23.489 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:23.489 12:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1193727 00:31:23.489 12:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:23.489 12:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:23.489 12:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1193727 00:31:23.489 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1193727 ']' 00:31:23.489 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.489 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:23.489 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.489 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:23.489 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:23.489 [2024-11-15 12:53:03.637525] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:31:23.489 [2024-11-15 12:53:03.637619] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.489 [2024-11-15 12:53:03.711489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:23.489 [2024-11-15 12:53:03.774181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:23.489 [2024-11-15 12:53:03.774235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
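Because the target here is started with --wait-for-rpc, the passthru-identify option can be applied before subsystem initialization; only then is framework_start_init issued and the TCP transport created. A condensed sketch of that ordering, reusing the namespace name and flags from the trace; the direct rpc.py calls stand in for the rpc_cmd wrapper, and the wait for the RPC socket (see the earlier polling sketch) is elided:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Start the target inside the test netns, deferring initialization until told via RPC.
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

    # Enable Identify passthrough for admin commands, then let initialization proceed.
    "$rootdir/scripts/rpc.py" nvmf_set_config --passthru-identify-ctrlr
    "$rootdir/scripts/rpc.py" framework_start_init

    # Create the TCP transport, matching identify_passthru.sh@38 in the trace that follows.
    "$rootdir/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192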
00:31:23.489 [2024-11-15 12:53:03.774249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.489 [2024-11-15 12:53:03.774260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.489 [2024-11-15 12:53:03.774269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:23.489 [2024-11-15 12:53:03.775895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.489 [2024-11-15 12:53:03.775958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:23.489 [2024-11-15 12:53:03.775994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:23.489 [2024-11-15 12:53:03.775996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.747 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:23.747 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:31:23.747 12:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:23.747 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.747 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:23.747 INFO: Log level set to 20 00:31:23.747 INFO: Requests: 00:31:23.747 { 00:31:23.747 "jsonrpc": "2.0", 00:31:23.747 "method": "nvmf_set_config", 00:31:23.747 "id": 1, 00:31:23.747 "params": { 00:31:23.747 "admin_cmd_passthru": { 00:31:23.747 "identify_ctrlr": true 00:31:23.747 } 00:31:23.747 } 00:31:23.747 } 00:31:23.747 00:31:23.747 INFO: response: 00:31:23.747 { 00:31:23.747 "jsonrpc": "2.0", 00:31:23.747 "id": 1, 00:31:23.747 "result": true 00:31:23.747 } 00:31:23.747 00:31:23.747 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.747 12:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:23.747 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.747 12:53:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:23.747 INFO: Setting log level to 20 00:31:23.747 INFO: Setting log level to 20 00:31:23.747 INFO: Log level set to 20 00:31:23.747 INFO: Log level set to 20 00:31:23.747 INFO: Requests: 00:31:23.747 { 00:31:23.747 "jsonrpc": "2.0", 00:31:23.747 "method": "framework_start_init", 00:31:23.747 "id": 1 00:31:23.747 } 00:31:23.747 00:31:23.747 INFO: Requests: 00:31:23.747 { 00:31:23.747 "jsonrpc": "2.0", 00:31:23.747 "method": "framework_start_init", 00:31:23.747 "id": 1 00:31:23.747 } 00:31:23.747 00:31:23.747 [2024-11-15 12:53:04.023759] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:23.747 INFO: response: 00:31:23.747 { 00:31:23.747 "jsonrpc": "2.0", 00:31:23.747 "id": 1, 00:31:23.747 "result": true 00:31:23.747 } 00:31:23.747 00:31:23.747 INFO: response: 00:31:23.747 { 00:31:23.747 "jsonrpc": "2.0", 00:31:23.747 "id": 1, 00:31:23.747 "result": true 00:31:23.747 } 00:31:23.747 00:31:23.747 12:53:04 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.747 12:53:04 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:23.747 12:53:04 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.747 12:53:04 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:23.747 INFO: Setting log level to 40 00:31:23.747 INFO: Setting log level to 40 00:31:23.747 INFO: Setting log level to 40 00:31:23.747 [2024-11-15 12:53:04.033900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.747 12:53:04 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.747 12:53:04 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:23.747 12:53:04 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:23.747 12:53:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:23.747 12:53:04 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:31:23.747 12:53:04 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.747 12:53:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:27.029 Nvme0n1 00:31:27.029 12:53:06 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.029 12:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:27.029 12:53:06 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.029 12:53:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:27.029 12:53:06 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.029 12:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:27.029 12:53:06 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.029 12:53:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:27.029 12:53:06 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.029 12:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:27.029 12:53:06 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.029 12:53:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:27.029 [2024-11-15 12:53:06.937628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.029 12:53:06 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.029 12:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:27.029 12:53:06 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.029 12:53:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:27.029 [ 00:31:27.029 { 00:31:27.029 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:27.029 "subtype": "Discovery", 00:31:27.029 "listen_addresses": [], 00:31:27.029 "allow_any_host": true, 00:31:27.029 "hosts": [] 00:31:27.029 }, 00:31:27.029 { 00:31:27.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:27.029 "subtype": "NVMe", 00:31:27.029 "listen_addresses": [ 00:31:27.029 { 00:31:27.029 "trtype": "TCP", 00:31:27.029 "adrfam": "IPv4", 00:31:27.029 "traddr": "10.0.0.2", 00:31:27.029 "trsvcid": "4420" 00:31:27.029 } 00:31:27.029 ], 00:31:27.029 "allow_any_host": true, 00:31:27.029 "hosts": [], 00:31:27.029 "serial_number": 
"SPDK00000000000001", 00:31:27.029 "model_number": "SPDK bdev Controller", 00:31:27.029 "max_namespaces": 1, 00:31:27.029 "min_cntlid": 1, 00:31:27.029 "max_cntlid": 65519, 00:31:27.029 "namespaces": [ 00:31:27.029 { 00:31:27.029 "nsid": 1, 00:31:27.029 "bdev_name": "Nvme0n1", 00:31:27.029 "name": "Nvme0n1", 00:31:27.029 "nguid": "9D4F202616074FA5A9099E246580C7DC", 00:31:27.029 "uuid": "9d4f2026-1607-4fa5-a909-9e246580c7dc" 00:31:27.029 } 00:31:27.029 ] 00:31:27.029 } 00:31:27.029 ] 00:31:27.029 12:53:06 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.029 12:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:27.029 12:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:27.029 12:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:27.029 12:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:31:27.030 12:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:27.030 12:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:27.030 12:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:27.030 12:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:27.030 12:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:31:27.030 12:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:27.030 12:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:27.030 12:53:07 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.030 12:53:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:27.030 12:53:07 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.030 12:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:27.030 12:53:07 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:27.030 12:53:07 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:27.030 12:53:07 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:31:27.030 12:53:07 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:27.030 12:53:07 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:31:27.030 12:53:07 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:27.030 12:53:07 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:27.030 rmmod nvme_tcp 00:31:27.030 rmmod nvme_fabrics 00:31:27.030 rmmod nvme_keyring 00:31:27.030 12:53:07 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:27.030 12:53:07 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:31:27.030 12:53:07 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:31:27.030 12:53:07 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 1193727 ']' 00:31:27.030 12:53:07 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1193727 00:31:27.030 12:53:07 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1193727 ']' 00:31:27.030 12:53:07 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1193727 00:31:27.030 12:53:07 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:31:27.030 12:53:07 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:27.030 12:53:07 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1193727 00:31:27.288 12:53:07 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:27.288 12:53:07 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:27.288 12:53:07 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1193727' 00:31:27.288 killing process with pid 1193727 00:31:27.288 12:53:07 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1193727 00:31:27.288 12:53:07 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1193727 00:31:28.664 12:53:08 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:28.664 12:53:08 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:28.664 12:53:08 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:28.664 12:53:08 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:31:28.664 12:53:08 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:31:28.664 12:53:08 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:28.664 12:53:08 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:31:28.664 12:53:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:28.664 12:53:08 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:28.664 12:53:08 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.664 12:53:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:28.664 12:53:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.199 12:53:10 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:31.199 00:31:31.199 real 0m18.422s 00:31:31.199 user 0m26.513s 00:31:31.199 sys 0m3.275s 00:31:31.199 12:53:10 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:31.199 12:53:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:31.199 ************************************ 00:31:31.199 END TEST nvmf_identify_passthru 00:31:31.199 ************************************ 00:31:31.199 12:53:11 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:31.199 12:53:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:31.199 12:53:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:31.199 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:31:31.199 ************************************ 00:31:31.199 START TEST nvmf_dif 00:31:31.199 ************************************ 00:31:31.199 12:53:11 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:31.199 * Looking for test 
storage... 00:31:31.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:31.199 12:53:11 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:31.199 12:53:11 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:31:31.199 12:53:11 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:31.199 12:53:11 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:31.199 12:53:11 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:31:31.199 12:53:11 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:31.199 12:53:11 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:31.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.199 --rc genhtml_branch_coverage=1 00:31:31.199 --rc genhtml_function_coverage=1 00:31:31.199 --rc genhtml_legend=1 00:31:31.199 --rc geninfo_all_blocks=1 00:31:31.199 --rc geninfo_unexecuted_blocks=1 00:31:31.199 00:31:31.199 ' 00:31:31.199 12:53:11 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:31.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.199 --rc genhtml_branch_coverage=1 00:31:31.199 --rc genhtml_function_coverage=1 00:31:31.199 --rc genhtml_legend=1 00:31:31.199 --rc geninfo_all_blocks=1 00:31:31.199 --rc geninfo_unexecuted_blocks=1 00:31:31.199 00:31:31.199 ' 00:31:31.199 12:53:11 nvmf_dif -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:31.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.199 --rc genhtml_branch_coverage=1 00:31:31.199 --rc genhtml_function_coverage=1 00:31:31.199 --rc genhtml_legend=1 00:31:31.199 --rc geninfo_all_blocks=1 00:31:31.199 --rc geninfo_unexecuted_blocks=1 00:31:31.199 00:31:31.199 ' 00:31:31.199 12:53:11 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:31.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.199 --rc genhtml_branch_coverage=1 00:31:31.199 --rc genhtml_function_coverage=1 00:31:31.199 --rc genhtml_legend=1 00:31:31.199 --rc geninfo_all_blocks=1 00:31:31.199 --rc geninfo_unexecuted_blocks=1 00:31:31.199 00:31:31.199 ' 00:31:31.199 12:53:11 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.199 12:53:11 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:31.199 12:53:11 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.199 12:53:11 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.199 12:53:11 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:31.199 12:53:11 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.199 12:53:11 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:31.199 12:53:11 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.200 12:53:11 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:31:31.200 12:53:11 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.200 12:53:11 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.200 12:53:11 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.200 12:53:11 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.200 12:53:11 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.200 12:53:11 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.200 12:53:11 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:31.200 12:53:11 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:31.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:31.200 12:53:11 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:31.200 12:53:11 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:31.200 12:53:11 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:31.200 12:53:11 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:31.200 12:53:11 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.200 12:53:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:31.200 12:53:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:31.200 12:53:11 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:31:31.200 12:53:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:33.101 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:33.101 
12:53:13 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:33.101 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:33.101 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:33.101 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:33.101 12:53:13 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:33.102 12:53:13 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:33.102 12:53:13 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:33.102 12:53:13 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:33.102 12:53:13 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:33.102 12:53:13 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:33.102 12:53:13 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:33.102 12:53:13 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:33.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:33.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:31:33.102 00:31:33.102 --- 10.0.0.2 ping statistics --- 00:31:33.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.102 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:31:33.102 12:53:13 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:33.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:33.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:31:33.102 00:31:33.102 --- 10.0.0.1 ping statistics --- 00:31:33.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.102 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:31:33.102 12:53:13 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:33.102 12:53:13 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:31:33.102 12:53:13 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:33.102 12:53:13 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:34.476 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:34.476 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:34.476 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:34.476 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:34.476 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:34.476 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:34.476 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:34.476 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:34.476 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:34.476 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:34.476 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:34.476 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:34.476 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:34.476 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:34.476 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:34.476 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:34.476 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:34.476 12:53:14 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:34.476 12:53:14 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:34.476 12:53:14 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:34.476 12:53:14 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:34.476 12:53:14 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:34.476 12:53:14 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:34.476 12:53:14 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:34.476 12:53:14 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:34.476 12:53:14 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:34.476 12:53:14 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:34.476 12:53:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:34.476 12:53:14 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1196992 00:31:34.476 12:53:14 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:34.476 12:53:14 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1196992 00:31:34.476 12:53:14 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1196992 ']' 00:31:34.476 12:53:14 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.476 12:53:14 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:34.476 12:53:14 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:31:34.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:34.476 12:53:14 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:34.476 12:53:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:34.476 [2024-11-15 12:53:14.782016] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:31:34.476 [2024-11-15 12:53:14.782084] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:34.734 [2024-11-15 12:53:14.850183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.734 [2024-11-15 12:53:14.903501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:34.734 [2024-11-15 12:53:14.903574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:34.734 [2024-11-15 12:53:14.903587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:34.734 [2024-11-15 12:53:14.903598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:34.734 [2024-11-15 12:53:14.903607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:34.734 [2024-11-15 12:53:14.904182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.734 12:53:15 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:34.734 12:53:15 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:31:34.734 12:53:15 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:34.734 12:53:15 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:34.734 12:53:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:34.992 12:53:15 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.992 12:53:15 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:34.992 12:53:15 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:34.992 12:53:15 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.992 12:53:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:34.992 [2024-11-15 12:53:15.093242] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.992 12:53:15 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.992 12:53:15 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:34.992 12:53:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:34.992 12:53:15 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:34.992 12:53:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:34.992 ************************************ 00:31:34.992 START TEST fio_dif_1_default 00:31:34.992 ************************************ 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:34.992 bdev_null0 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:34.992 [2024-11-15 12:53:15.149486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:34.992 { 00:31:34.992 "params": { 00:31:34.992 "name": "Nvme$subsystem", 00:31:34.992 "trtype": "$TEST_TRANSPORT", 00:31:34.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:34.992 "adrfam": "ipv4", 00:31:34.992 "trsvcid": "$NVMF_PORT", 00:31:34.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:34.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:34.992 "hdgst": ${hdgst:-false}, 00:31:34.992 "ddgst": ${ddgst:-false} 00:31:34.992 }, 00:31:34.992 "method": "bdev_nvme_attach_controller" 00:31:34.992 } 00:31:34.992 EOF 00:31:34.992 )") 00:31:34.992 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
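For readability, the fio invocation that the wrapper above pieces together condenses to roughly the following; the paths and flags are the ones visible in the trace, /dev/fd/62 carries the generated JSON bdev configuration (printed immediately below) and /dev/fd/61 the generated fio job file, so treat this as an illustrative sketch rather than a command to copy verbatim:

    # Sketch of the spdk_bdev fio plugin invocation driven by the test wrapper above
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
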
00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:34.993 "params": { 00:31:34.993 "name": "Nvme0", 00:31:34.993 "trtype": "tcp", 00:31:34.993 "traddr": "10.0.0.2", 00:31:34.993 "adrfam": "ipv4", 00:31:34.993 "trsvcid": "4420", 00:31:34.993 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:34.993 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:34.993 "hdgst": false, 00:31:34.993 "ddgst": false 00:31:34.993 }, 00:31:34.993 "method": "bdev_nvme_attach_controller" 00:31:34.993 }' 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:34.993 12:53:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:35.250 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:35.250 fio-3.35 00:31:35.250 Starting 1 thread 00:31:47.447 00:31:47.447 filename0: (groupid=0, jobs=1): err= 0: pid=1197220: Fri Nov 15 12:53:26 2024 00:31:47.447 read: IOPS=101, BW=407KiB/s (417kB/s)(4080KiB/10022msec) 00:31:47.447 slat (nsec): min=6640, max=51567, avg=8595.51, stdev=3111.92 00:31:47.447 clat (usec): min=611, max=45981, avg=39271.16, stdev=8195.49 00:31:47.447 lat (usec): min=618, max=45998, avg=39279.76, stdev=8195.59 00:31:47.447 clat percentiles (usec): 00:31:47.447 | 1.00th=[ 693], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:47.447 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:47.447 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:47.447 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:31:47.447 | 99.99th=[45876] 00:31:47.447 bw ( KiB/s): min= 384, max= 448, per=99.73%, avg=406.40, stdev=18.28, samples=20 00:31:47.447 iops : min= 96, max= 112, avg=101.60, stdev= 4.57, samples=20 00:31:47.447 lat (usec) : 750=3.04%, 1000=1.27% 00:31:47.447 lat (msec) : 50=95.69% 00:31:47.447 cpu : usr=90.93%, sys=8.77%, ctx=16, majf=0, minf=250 00:31:47.447 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:47.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.447 issued rwts: total=1020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.447 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:47.447 
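The target-side configuration this job runs against (seen in the rpc_cmd calls earlier in the trace, before the run summary that follows below) corresponds roughly to the RPC sequence sketched here; scripts/rpc.py is used purely for illustration, while the test itself issues the same RPCs through its rpc_cmd helper over /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace:

    # Target side, as exercised by fio_dif_1_default: TCP transport with DIF
    # insert/strip, one null bdev (64 MiB, 512-byte blocks, 16-byte metadata,
    # DIF type 1) exposed through subsystem cnode0 listening on 10.0.0.2:4420.
    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The fio plugin then attaches to that subsystem over TCP as an initiator using the bdev_nvme_attach_controller parameters printed above, which is what the run statistics just shown were measured against.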
00:31:47.447 Run status group 0 (all jobs): 00:31:47.447 READ: bw=407KiB/s (417kB/s), 407KiB/s-407KiB/s (417kB/s-417kB/s), io=4080KiB (4178kB), run=10022-10022msec 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.447 00:31:47.447 real 0m11.383s 00:31:47.447 user 0m10.584s 00:31:47.447 sys 0m1.203s 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.447 12:53:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:47.447 ************************************ 00:31:47.447 END TEST fio_dif_1_default 00:31:47.447 ************************************ 00:31:47.447 12:53:26 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:47.447 12:53:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:47.447 12:53:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:47.447 12:53:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:47.447 ************************************ 00:31:47.447 START TEST fio_dif_1_multi_subsystems 00:31:47.448 ************************************ 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:47.448 bdev_null0 00:31:47.448 12:53:26 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:47.448 [2024-11-15 12:53:26.572088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:47.448 bdev_null1 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:47.448 { 00:31:47.448 "params": { 00:31:47.448 "name": "Nvme$subsystem", 00:31:47.448 "trtype": "$TEST_TRANSPORT", 00:31:47.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:47.448 "adrfam": "ipv4", 00:31:47.448 "trsvcid": "$NVMF_PORT", 00:31:47.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:47.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:47.448 "hdgst": ${hdgst:-false}, 00:31:47.448 "ddgst": ${ddgst:-false} 00:31:47.448 }, 00:31:47.448 "method": "bdev_nvme_attach_controller" 00:31:47.448 } 00:31:47.448 EOF 00:31:47.448 )") 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:47.448 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:47.449 { 00:31:47.449 "params": { 00:31:47.449 "name": "Nvme$subsystem", 00:31:47.449 "trtype": "$TEST_TRANSPORT", 00:31:47.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:47.449 "adrfam": "ipv4", 00:31:47.449 "trsvcid": "$NVMF_PORT", 00:31:47.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:47.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:47.449 "hdgst": ${hdgst:-false}, 00:31:47.449 "ddgst": ${ddgst:-false} 00:31:47.449 }, 00:31:47.449 "method": "bdev_nvme_attach_controller" 00:31:47.449 } 00:31:47.449 EOF 00:31:47.449 )") 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:47.449 "params": { 00:31:47.449 "name": "Nvme0", 00:31:47.449 "trtype": "tcp", 00:31:47.449 "traddr": "10.0.0.2", 00:31:47.449 "adrfam": "ipv4", 00:31:47.449 "trsvcid": "4420", 00:31:47.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:47.449 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:47.449 "hdgst": false, 00:31:47.449 "ddgst": false 00:31:47.449 }, 00:31:47.449 "method": "bdev_nvme_attach_controller" 00:31:47.449 },{ 00:31:47.449 "params": { 00:31:47.449 "name": "Nvme1", 00:31:47.449 "trtype": "tcp", 00:31:47.449 "traddr": "10.0.0.2", 00:31:47.449 "adrfam": "ipv4", 00:31:47.449 "trsvcid": "4420", 00:31:47.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:47.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:47.449 "hdgst": false, 00:31:47.449 "ddgst": false 00:31:47.449 }, 00:31:47.449 "method": "bdev_nvme_attach_controller" 00:31:47.449 }' 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:47.449 12:53:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:47.449 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:47.449 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:47.449 fio-3.35 00:31:47.449 Starting 2 threads 00:31:57.413 00:31:57.413 filename0: (groupid=0, jobs=1): err= 0: pid=1198623: Fri Nov 15 12:53:37 2024 00:31:57.413 read: IOPS=195, BW=783KiB/s (802kB/s)(7856KiB/10027msec) 00:31:57.413 slat (nsec): min=6941, max=36622, avg=9206.40, stdev=3680.52 00:31:57.413 clat (usec): min=490, max=42452, avg=20392.34, stdev=20404.91 00:31:57.413 lat (usec): min=497, max=42463, avg=20401.54, stdev=20404.03 00:31:57.413 clat percentiles (usec): 00:31:57.413 | 1.00th=[ 553], 5.00th=[ 570], 10.00th=[ 578], 20.00th=[ 586], 00:31:57.413 | 30.00th=[ 611], 40.00th=[ 635], 50.00th=[ 676], 60.00th=[41157], 00:31:57.413 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:57.413 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:57.413 | 99.99th=[42206] 00:31:57.413 bw ( KiB/s): min= 704, max= 864, per=50.79%, avg=784.00, stdev=36.71, samples=20 00:31:57.413 iops : min= 176, max= 216, avg=196.00, stdev= 9.18, samples=20 00:31:57.413 lat (usec) : 500=0.10%, 750=51.22%, 1000=0.20% 00:31:57.413 lat (msec) : 50=48.47% 00:31:57.413 cpu : usr=97.34%, sys=2.37%, ctx=15, majf=0, minf=66 00:31:57.413 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.413 issued rwts: total=1964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.413 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:57.413 filename1: (groupid=0, jobs=1): err= 0: pid=1198624: Fri Nov 15 12:53:37 2024 00:31:57.413 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10037msec) 00:31:57.413 slat (usec): min=7, max=100, avg=10.27, stdev= 4.52 00:31:57.413 clat (usec): min=516, max=41726, avg=21053.48, stdev=20362.72 00:31:57.413 lat (usec): min=525, max=41770, avg=21063.75, stdev=20361.85 00:31:57.413 clat percentiles (usec): 00:31:57.413 | 1.00th=[ 562], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[ 603], 00:31:57.413 | 30.00th=[ 611], 40.00th=[ 627], 50.00th=[41157], 60.00th=[41157], 00:31:57.413 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:57.413 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:57.413 | 99.99th=[41681] 00:31:57.413 bw ( KiB/s): min= 672, max= 768, per=49.30%, avg=760.00, stdev=25.16, samples=20 00:31:57.413 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:31:57.413 lat (usec) : 750=49.79% 00:31:57.413 lat (msec) : 50=50.21% 00:31:57.413 cpu : usr=97.48%, sys=2.22%, ctx=13, majf=0, minf=203 00:31:57.413 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.413 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.414 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.414 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:57.414 00:31:57.414 Run status group 0 (all jobs): 00:31:57.414 READ: bw=1541KiB/s (1578kB/s), 759KiB/s-783KiB/s (777kB/s-802kB/s), io=15.1MiB (15.8MB), run=10027-10037msec 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.672 00:31:57.672 real 0m11.395s 00:31:57.672 user 0m20.876s 00:31:57.672 sys 0m0.792s 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:57.672 12:53:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:57.672 ************************************ 00:31:57.672 END TEST fio_dif_1_multi_subsystems 00:31:57.672 ************************************ 00:31:57.672 12:53:37 nvmf_dif -- 
target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:57.672 12:53:37 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:57.672 12:53:37 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:57.672 12:53:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:57.672 ************************************ 00:31:57.672 START TEST fio_dif_rand_params 00:31:57.672 ************************************ 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.672 bdev_null0 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.672 12:53:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.672 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.672 12:53:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:57.672 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.672 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.930 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.930 12:53:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:57.930 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.930 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.931 [2024-11-15 12:53:38.020030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:57.931 { 00:31:57.931 "params": { 00:31:57.931 "name": "Nvme$subsystem", 00:31:57.931 "trtype": "$TEST_TRANSPORT", 00:31:57.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:57.931 "adrfam": "ipv4", 00:31:57.931 "trsvcid": "$NVMF_PORT", 00:31:57.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:57.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:57.931 "hdgst": ${hdgst:-false}, 00:31:57.931 "ddgst": ${ddgst:-false} 00:31:57.931 }, 00:31:57.931 "method": "bdev_nvme_attach_controller" 00:31:57.931 } 00:31:57.931 EOF 00:31:57.931 )") 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:57.931 "params": { 00:31:57.931 "name": "Nvme0", 00:31:57.931 "trtype": "tcp", 00:31:57.931 "traddr": "10.0.0.2", 00:31:57.931 "adrfam": "ipv4", 00:31:57.931 "trsvcid": "4420", 00:31:57.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:57.931 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:57.931 "hdgst": false, 00:31:57.931 "ddgst": false 00:31:57.931 }, 00:31:57.931 "method": "bdev_nvme_attach_controller" 00:31:57.931 }' 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:57.931 12:53:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:58.189 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:58.189 ... 
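[editorial sketch] The fio job file for this run is generated on the fly by gen_fio_conf and handed to fio over /dev/fd/61, so it never appears in the log. Below is a minimal hand-written equivalent of the traced invocation, assuming the bdev_nvme_attach_controller JSON printed above has been saved to bdev.json and that controller Nvme0 exposes a namespace bdev named Nvme0n1 (the file name and bdev name are assumptions, not taken from the log); the I/O parameters (randread, bs=128k, iodepth=3, numjobs=3, runtime=5) mirror the target/dif.sh settings traced earlier.

    # Minimal sketch, not part of the captured log: reproduce the traced fio run by hand.
    # Assumes bdev.json holds the JSON config printed above and that the attached
    # controller "Nvme0" exposes a bdev named "Nvme0n1" (illustrative names).
    cat > rand_params.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1
    EOF

    # Same pattern as the traced command: preload the SPDK fio bdev plugin and
    # point it at the bdev JSON config instead of the /dev/fd descriptors.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --spdk_json_conf=./bdev.json rand_params.fio

The log resumes below with fio's own banner and the per-job statistics for this configuration.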
00:31:58.189 fio-3.35 00:31:58.189 Starting 3 threads 00:32:04.748 00:32:04.748 filename0: (groupid=0, jobs=1): err= 0: pid=1200027: Fri Nov 15 12:53:43 2024 00:32:04.748 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(145MiB/5045msec) 00:32:04.748 slat (nsec): min=4805, max=86372, avg=15512.35, stdev=5462.65 00:32:04.748 clat (usec): min=5073, max=53189, avg=13013.32, stdev=4525.08 00:32:04.748 lat (usec): min=5080, max=53202, avg=13028.83, stdev=4525.02 00:32:04.748 clat percentiles (usec): 00:32:04.748 | 1.00th=[ 7177], 5.00th=[ 8586], 10.00th=[10028], 20.00th=[10945], 00:32:04.748 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12256], 60.00th=[12911], 00:32:04.748 | 70.00th=[13829], 80.00th=[14746], 90.00th=[16057], 95.00th=[16712], 00:32:04.748 | 99.00th=[45351], 99.50th=[48497], 99.90th=[52167], 99.95th=[53216], 00:32:04.748 | 99.99th=[53216] 00:32:04.748 bw ( KiB/s): min=26880, max=32512, per=33.75%, avg=29593.60, stdev=1862.54, samples=10 00:32:04.748 iops : min= 210, max= 254, avg=231.20, stdev=14.55, samples=10 00:32:04.748 lat (msec) : 10=9.24%, 20=89.55%, 50=0.95%, 100=0.26% 00:32:04.748 cpu : usr=92.66%, sys=6.80%, ctx=10, majf=0, minf=152 00:32:04.748 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.749 issued rwts: total=1158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.749 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:04.749 filename0: (groupid=0, jobs=1): err= 0: pid=1200028: Fri Nov 15 12:53:43 2024 00:32:04.749 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(134MiB/5003msec) 00:32:04.749 slat (nsec): min=7283, max=85160, avg=14386.92, stdev=4649.80 00:32:04.749 clat (usec): min=5179, max=52911, avg=13966.66, stdev=6173.63 00:32:04.749 lat (usec): min=5187, max=52919, avg=13981.05, stdev=6173.24 00:32:04.749 clat percentiles (usec): 00:32:04.749 | 1.00th=[ 6128], 5.00th=[10028], 10.00th=[10683], 20.00th=[11207], 00:32:04.749 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12780], 60.00th=[13698], 00:32:04.749 | 70.00th=[14615], 80.00th=[15401], 90.00th=[16319], 95.00th=[16909], 00:32:04.749 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:32:04.749 | 99.99th=[52691] 00:32:04.749 bw ( KiB/s): min=13824, max=31744, per=31.27%, avg=27417.60, stdev=5113.52, samples=10 00:32:04.749 iops : min= 108, max= 248, avg=214.20, stdev=39.95, samples=10 00:32:04.749 lat (msec) : 10=4.19%, 20=93.29%, 50=0.84%, 100=1.68% 00:32:04.749 cpu : usr=93.56%, sys=5.92%, ctx=15, majf=0, minf=174 00:32:04.749 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.749 issued rwts: total=1073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.749 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:04.749 filename0: (groupid=0, jobs=1): err= 0: pid=1200029: Fri Nov 15 12:53:43 2024 00:32:04.749 read: IOPS=242, BW=30.4MiB/s (31.8MB/s)(153MiB/5043msec) 00:32:04.749 slat (nsec): min=4124, max=41065, avg=14233.48, stdev=4076.70 00:32:04.749 clat (usec): min=4873, max=53010, avg=12297.36, stdev=4865.75 00:32:04.749 lat (usec): min=4886, max=53023, avg=12311.59, stdev=4865.72 00:32:04.749 clat percentiles (usec): 00:32:04.749 | 1.00th=[ 6325], 5.00th=[ 8029], 10.00th=[ 9765], 
20.00th=[10552], 00:32:04.749 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:32:04.749 | 70.00th=[12649], 80.00th=[13042], 90.00th=[14091], 95.00th=[15533], 00:32:04.749 | 99.00th=[49021], 99.50th=[51119], 99.90th=[52691], 99.95th=[53216], 00:32:04.749 | 99.99th=[53216] 00:32:04.749 bw ( KiB/s): min=27136, max=34048, per=35.71%, avg=31308.80, stdev=2189.10, samples=10 00:32:04.749 iops : min= 212, max= 266, avg=244.60, stdev=17.10, samples=10 00:32:04.749 lat (msec) : 10=12.00%, 20=86.61%, 50=0.49%, 100=0.90% 00:32:04.749 cpu : usr=91.99%, sys=7.46%, ctx=18, majf=0, minf=99 00:32:04.749 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.749 issued rwts: total=1225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.749 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:04.749 00:32:04.749 Run status group 0 (all jobs): 00:32:04.749 READ: bw=85.6MiB/s (89.8MB/s), 26.8MiB/s-30.4MiB/s (28.1MB/s-31.8MB/s), io=432MiB (453MB), run=5003-5045msec 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- 
# rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 bdev_null0 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 [2024-11-15 12:53:44.151318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 bdev_null1 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 bdev_null2 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:04.749 { 00:32:04.749 "params": { 00:32:04.749 "name": "Nvme$subsystem", 00:32:04.749 "trtype": "$TEST_TRANSPORT", 00:32:04.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:04.749 "adrfam": "ipv4", 
00:32:04.749 "trsvcid": "$NVMF_PORT", 00:32:04.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:04.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:04.749 "hdgst": ${hdgst:-false}, 00:32:04.749 "ddgst": ${ddgst:-false} 00:32:04.749 }, 00:32:04.749 "method": "bdev_nvme_attach_controller" 00:32:04.749 } 00:32:04.749 EOF 00:32:04.749 )") 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:04.749 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:04.750 { 00:32:04.750 "params": { 00:32:04.750 "name": "Nvme$subsystem", 00:32:04.750 "trtype": "$TEST_TRANSPORT", 00:32:04.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:04.750 "adrfam": "ipv4", 00:32:04.750 "trsvcid": "$NVMF_PORT", 00:32:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:04.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:04.750 "hdgst": ${hdgst:-false}, 00:32:04.750 "ddgst": ${ddgst:-false} 00:32:04.750 }, 00:32:04.750 "method": "bdev_nvme_attach_controller" 00:32:04.750 } 00:32:04.750 EOF 00:32:04.750 )") 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@72 -- # (( file <= files )) 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:04.750 { 00:32:04.750 "params": { 00:32:04.750 "name": "Nvme$subsystem", 00:32:04.750 "trtype": "$TEST_TRANSPORT", 00:32:04.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:04.750 "adrfam": "ipv4", 00:32:04.750 "trsvcid": "$NVMF_PORT", 00:32:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:04.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:04.750 "hdgst": ${hdgst:-false}, 00:32:04.750 "ddgst": ${ddgst:-false} 00:32:04.750 }, 00:32:04.750 "method": "bdev_nvme_attach_controller" 00:32:04.750 } 00:32:04.750 EOF 00:32:04.750 )") 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:04.750 "params": { 00:32:04.750 "name": "Nvme0", 00:32:04.750 "trtype": "tcp", 00:32:04.750 "traddr": "10.0.0.2", 00:32:04.750 "adrfam": "ipv4", 00:32:04.750 "trsvcid": "4420", 00:32:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:04.750 "hdgst": false, 00:32:04.750 "ddgst": false 00:32:04.750 }, 00:32:04.750 "method": "bdev_nvme_attach_controller" 00:32:04.750 },{ 00:32:04.750 "params": { 00:32:04.750 "name": "Nvme1", 00:32:04.750 "trtype": "tcp", 00:32:04.750 "traddr": "10.0.0.2", 00:32:04.750 "adrfam": "ipv4", 00:32:04.750 "trsvcid": "4420", 00:32:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:04.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:04.750 "hdgst": false, 00:32:04.750 "ddgst": false 00:32:04.750 }, 00:32:04.750 "method": "bdev_nvme_attach_controller" 00:32:04.750 },{ 00:32:04.750 "params": { 00:32:04.750 "name": "Nvme2", 00:32:04.750 "trtype": "tcp", 00:32:04.750 "traddr": "10.0.0.2", 00:32:04.750 "adrfam": "ipv4", 00:32:04.750 "trsvcid": "4420", 00:32:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:04.750 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:04.750 "hdgst": false, 00:32:04.750 "ddgst": false 00:32:04.750 }, 00:32:04.750 "method": "bdev_nvme_attach_controller" 00:32:04.750 }' 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:04.750 12:53:44 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:04.750 12:53:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:04.750 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:04.750 ... 00:32:04.750 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:04.750 ... 00:32:04.750 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:04.750 ... 00:32:04.750 fio-3.35 00:32:04.750 Starting 24 threads 00:32:16.955 00:32:16.955 filename0: (groupid=0, jobs=1): err= 0: pid=1200886: Fri Nov 15 12:53:55 2024 00:32:16.955 read: IOPS=452, BW=1812KiB/s (1855kB/s)(17.8MiB/10031msec) 00:32:16.955 slat (nsec): min=8761, max=64579, avg=30474.89, stdev=8994.45 00:32:16.955 clat (usec): min=19325, max=47579, avg=35059.83, stdev=3621.15 00:32:16.955 lat (usec): min=19336, max=47624, avg=35090.31, stdev=3620.64 00:32:16.955 clat percentiles (usec): 00:32:16.955 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:16.955 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:16.955 | 70.00th=[34341], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:32:16.955 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:32:16.955 | 99.99th=[47449] 00:32:16.955 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1811.20, stdev=151.31, samples=20 00:32:16.955 iops : min= 352, max= 480, avg=452.80, stdev=37.83, samples=20 00:32:16.955 lat (msec) : 20=0.04%, 50=99.96% 00:32:16.955 cpu : usr=98.19%, sys=1.41%, ctx=15, majf=0, minf=22 00:32:16.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.955 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.955 filename0: (groupid=0, jobs=1): err= 0: pid=1200887: Fri Nov 15 12:53:55 2024 00:32:16.955 read: IOPS=454, BW=1818KiB/s (1862kB/s)(17.8MiB/10027msec) 00:32:16.955 slat (nsec): min=9176, max=95487, avg=34658.63, stdev=14807.27 00:32:16.955 clat (usec): min=15397, max=55722, avg=34948.37, stdev=3897.71 00:32:16.955 lat (usec): min=15455, max=55778, avg=34983.03, stdev=3895.63 00:32:16.955 clat percentiles (usec): 00:32:16.955 | 1.00th=[23462], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:16.955 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:16.955 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:32:16.955 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:32:16.955 | 99.99th=[55837] 00:32:16.955 bw ( KiB/s): min= 1408, max= 1923, per=4.18%, avg=1817.75, stdev=166.41, samples=20 00:32:16.955 iops : min= 352, max= 480, avg=454.40, stdev=41.58, samples=20 00:32:16.955 lat (msec) : 20=0.35%, 50=99.61%, 100=0.04% 00:32:16.955 cpu : usr=97.72%, sys=1.64%, ctx=73, majf=0, minf=34 00:32:16.955 IO depths : 1=0.4%, 2=6.6%, 4=24.9%, 8=55.9%, 
16=12.1%, 32=0.0%, >=64=0.0% 00:32:16.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.955 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.955 issued rwts: total=4558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.955 filename0: (groupid=0, jobs=1): err= 0: pid=1200888: Fri Nov 15 12:53:55 2024 00:32:16.955 read: IOPS=452, BW=1810KiB/s (1853kB/s)(17.7MiB/10007msec) 00:32:16.955 slat (nsec): min=11840, max=76651, avg=33993.00, stdev=10701.85 00:32:16.955 clat (usec): min=22255, max=61511, avg=35049.72, stdev=3801.50 00:32:16.955 lat (usec): min=22298, max=61558, avg=35083.72, stdev=3801.10 00:32:16.955 clat percentiles (usec): 00:32:16.955 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:16.955 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:16.955 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43254], 95.00th=[43779], 00:32:16.955 | 99.00th=[44303], 99.50th=[44303], 99.90th=[54264], 99.95th=[54264], 00:32:16.955 | 99.99th=[61604] 00:32:16.955 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1804.95, stdev=170.59, samples=20 00:32:16.955 iops : min= 352, max= 480, avg=451.20, stdev=42.68, samples=20 00:32:16.955 lat (msec) : 50=99.65%, 100=0.35% 00:32:16.955 cpu : usr=98.10%, sys=1.40%, ctx=65, majf=0, minf=23 00:32:16.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.956 filename0: (groupid=0, jobs=1): err= 0: pid=1200889: Fri Nov 15 12:53:55 2024 00:32:16.956 read: IOPS=454, BW=1819KiB/s (1862kB/s)(17.8MiB/10029msec) 00:32:16.956 slat (usec): min=8, max=176, avg=27.26, stdev=16.84 00:32:16.956 clat (usec): min=17582, max=44464, avg=34951.30, stdev=3823.90 00:32:16.956 lat (usec): min=17603, max=44482, avg=34978.56, stdev=3819.41 00:32:16.956 clat percentiles (usec): 00:32:16.956 | 1.00th=[24773], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:32:16.956 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:16.956 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43779], 95.00th=[43779], 00:32:16.956 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:32:16.956 | 99.99th=[44303] 00:32:16.956 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1817.60, stdev=169.20, samples=20 00:32:16.956 iops : min= 352, max= 480, avg=454.40, stdev=42.30, samples=20 00:32:16.956 lat (msec) : 20=0.35%, 50=99.65% 00:32:16.956 cpu : usr=98.11%, sys=1.46%, ctx=20, majf=0, minf=46 00:32:16.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:16.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.956 filename0: (groupid=0, jobs=1): err= 0: pid=1200890: Fri Nov 15 12:53:55 2024 00:32:16.956 read: IOPS=454, BW=1819KiB/s (1862kB/s)(17.8MiB/10029msec) 00:32:16.956 slat (usec): min=12, max=113, avg=37.83, stdev=13.96 00:32:16.956 clat (usec): 
min=15987, max=44435, avg=34817.64, stdev=3778.65 00:32:16.956 lat (usec): min=16044, max=44469, avg=34855.47, stdev=3778.85 00:32:16.956 clat percentiles (usec): 00:32:16.956 | 1.00th=[25035], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:16.956 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:16.956 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43254], 95.00th=[43779], 00:32:16.956 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:32:16.956 | 99.99th=[44303] 00:32:16.956 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1817.60, stdev=169.20, samples=20 00:32:16.956 iops : min= 352, max= 480, avg=454.40, stdev=42.30, samples=20 00:32:16.956 lat (msec) : 20=0.35%, 50=99.65% 00:32:16.956 cpu : usr=98.23%, sys=1.33%, ctx=9, majf=0, minf=16 00:32:16.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.956 filename0: (groupid=0, jobs=1): err= 0: pid=1200891: Fri Nov 15 12:53:55 2024 00:32:16.956 read: IOPS=453, BW=1814KiB/s (1857kB/s)(17.8MiB/10022msec) 00:32:16.956 slat (nsec): min=9537, max=79613, avg=33129.37, stdev=10541.83 00:32:16.956 clat (usec): min=22318, max=48082, avg=35008.72, stdev=3631.87 00:32:16.956 lat (usec): min=22348, max=48119, avg=35041.85, stdev=3631.58 00:32:16.956 clat percentiles (usec): 00:32:16.956 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:16.956 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:16.956 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43254], 95.00th=[43779], 00:32:16.956 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:32:16.956 | 99.99th=[47973] 00:32:16.956 bw ( KiB/s): min= 1408, max= 1920, per=4.16%, avg=1808.55, stdev=166.14, samples=20 00:32:16.956 iops : min= 352, max= 480, avg=452.10, stdev=41.52, samples=20 00:32:16.956 lat (msec) : 50=100.00% 00:32:16.956 cpu : usr=97.18%, sys=1.83%, ctx=177, majf=0, minf=23 00:32:16.956 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:32:16.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.956 filename0: (groupid=0, jobs=1): err= 0: pid=1200892: Fri Nov 15 12:53:55 2024 00:32:16.956 read: IOPS=452, BW=1809KiB/s (1852kB/s)(17.7MiB/10010msec) 00:32:16.956 slat (nsec): min=8188, max=72749, avg=28609.80, stdev=10393.25 00:32:16.956 clat (usec): min=11581, max=92217, avg=35164.72, stdev=4455.19 00:32:16.956 lat (usec): min=11604, max=92261, avg=35193.33, stdev=4455.36 00:32:16.956 clat percentiles (usec): 00:32:16.956 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:32:16.956 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:16.956 | 70.00th=[34341], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:32:16.956 | 99.00th=[44303], 99.50th=[44303], 99.90th=[70779], 99.95th=[71828], 00:32:16.956 | 99.99th=[91751] 00:32:16.956 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1804.00, 
stdev=169.08, samples=20 00:32:16.956 iops : min= 352, max= 480, avg=451.00, stdev=42.27, samples=20 00:32:16.956 lat (msec) : 20=0.35%, 50=99.29%, 100=0.35% 00:32:16.956 cpu : usr=98.27%, sys=1.30%, ctx=14, majf=0, minf=26 00:32:16.956 IO depths : 1=0.3%, 2=6.5%, 4=25.0%, 8=56.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:32:16.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 issued rwts: total=4526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.956 filename0: (groupid=0, jobs=1): err= 0: pid=1200893: Fri Nov 15 12:53:55 2024 00:32:16.956 read: IOPS=452, BW=1810KiB/s (1853kB/s)(17.7MiB/10007msec) 00:32:16.956 slat (nsec): min=8373, max=76018, avg=33067.45, stdev=11087.62 00:32:16.956 clat (usec): min=22250, max=61662, avg=35050.56, stdev=3799.00 00:32:16.956 lat (usec): min=22272, max=61698, avg=35083.63, stdev=3798.69 00:32:16.956 clat percentiles (usec): 00:32:16.956 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:16.956 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:16.956 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43254], 95.00th=[43779], 00:32:16.956 | 99.00th=[44303], 99.50th=[44303], 99.90th=[54264], 99.95th=[54264], 00:32:16.956 | 99.99th=[61604] 00:32:16.956 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1804.95, stdev=170.59, samples=20 00:32:16.956 iops : min= 352, max= 480, avg=451.20, stdev=42.68, samples=20 00:32:16.956 lat (msec) : 50=99.65%, 100=0.35% 00:32:16.956 cpu : usr=96.80%, sys=1.91%, ctx=238, majf=0, minf=28 00:32:16.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.956 filename1: (groupid=0, jobs=1): err= 0: pid=1200894: Fri Nov 15 12:53:55 2024 00:32:16.956 read: IOPS=454, BW=1819KiB/s (1862kB/s)(17.8MiB/10029msec) 00:32:16.956 slat (nsec): min=8371, max=80302, avg=31142.22, stdev=13602.05 00:32:16.956 clat (usec): min=17785, max=44425, avg=34928.19, stdev=3796.23 00:32:16.956 lat (usec): min=17811, max=44447, avg=34959.33, stdev=3795.75 00:32:16.956 clat percentiles (usec): 00:32:16.956 | 1.00th=[25035], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:16.956 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:16.956 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:32:16.956 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:32:16.956 | 99.99th=[44303] 00:32:16.956 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1817.60, stdev=169.20, samples=20 00:32:16.956 iops : min= 352, max= 480, avg=454.40, stdev=42.30, samples=20 00:32:16.956 lat (msec) : 20=0.35%, 50=99.65% 00:32:16.956 cpu : usr=98.09%, sys=1.43%, ctx=23, majf=0, minf=50 00:32:16.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.956 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:32:16.956 filename1: (groupid=0, jobs=1): err= 0: pid=1200895: Fri Nov 15 12:53:55 2024 00:32:16.956 read: IOPS=453, BW=1814KiB/s (1857kB/s)(17.8MiB/10022msec) 00:32:16.956 slat (nsec): min=8353, max=76623, avg=28267.68, stdev=12101.88 00:32:16.956 clat (usec): min=22258, max=44419, avg=35060.68, stdev=3621.95 00:32:16.956 lat (usec): min=22278, max=44438, avg=35088.95, stdev=3620.14 00:32:16.956 clat percentiles (usec): 00:32:16.956 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:32:16.956 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:16.956 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:32:16.956 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:32:16.956 | 99.99th=[44303] 00:32:16.956 bw ( KiB/s): min= 1408, max= 1920, per=4.16%, avg=1808.55, stdev=166.14, samples=20 00:32:16.956 iops : min= 352, max= 480, avg=452.10, stdev=41.52, samples=20 00:32:16.956 lat (msec) : 50=100.00% 00:32:16.956 cpu : usr=98.24%, sys=1.34%, ctx=17, majf=0, minf=28 00:32:16.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:16.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.956 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.956 filename1: (groupid=0, jobs=1): err= 0: pid=1200896: Fri Nov 15 12:53:55 2024 00:32:16.956 read: IOPS=451, BW=1808KiB/s (1851kB/s)(17.7MiB/10009msec) 00:32:16.956 slat (usec): min=10, max=116, avg=40.25, stdev=15.58 00:32:16.956 clat (usec): min=18899, max=80080, avg=35040.11, stdev=4085.77 00:32:16.956 lat (usec): min=18919, max=80101, avg=35080.36, stdev=4083.51 00:32:16.956 clat percentiles (usec): 00:32:16.957 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:16.957 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:16.957 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43254], 95.00th=[43779], 00:32:16.957 | 99.00th=[44303], 99.50th=[44303], 99.90th=[60031], 99.95th=[60031], 00:32:16.957 | 99.99th=[80217] 00:32:16.957 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1803.20, stdev=153.74, samples=20 00:32:16.957 iops : min= 352, max= 480, avg=450.80, stdev=38.43, samples=20 00:32:16.957 lat (msec) : 20=0.35%, 50=99.29%, 100=0.35% 00:32:16.957 cpu : usr=98.26%, sys=1.33%, ctx=14, majf=0, minf=20 00:32:16.957 IO depths : 1=5.6%, 2=11.8%, 4=24.9%, 8=50.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:32:16.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.957 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.957 issued rwts: total=4524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.957 filename1: (groupid=0, jobs=1): err= 0: pid=1200897: Fri Nov 15 12:53:55 2024 00:32:16.957 read: IOPS=454, BW=1819KiB/s (1863kB/s)(17.8MiB/10027msec) 00:32:16.957 slat (usec): min=11, max=101, avg=35.84, stdev=12.98 00:32:16.957 clat (usec): min=15397, max=44429, avg=34884.82, stdev=3816.77 00:32:16.957 lat (usec): min=15455, max=44455, avg=34920.66, stdev=3814.84 00:32:16.957 clat percentiles (usec): 00:32:16.957 | 1.00th=[23462], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:16.957 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 
60.00th=[33817], 00:32:16.957 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:32:16.957 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:32:16.957 | 99.99th=[44303] 00:32:16.957 bw ( KiB/s): min= 1408, max= 1923, per=4.18%, avg=1817.75, stdev=169.30, samples=20 00:32:16.957 iops : min= 352, max= 480, avg=454.40, stdev=42.30, samples=20 00:32:16.957 lat (msec) : 20=0.35%, 50=99.65% 00:32:16.957 cpu : usr=98.23%, sys=1.35%, ctx=19, majf=0, minf=27 00:32:16.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.957 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.957 filename1: (groupid=0, jobs=1): err= 0: pid=1200898: Fri Nov 15 12:53:55 2024 00:32:16.957 read: IOPS=455, BW=1820KiB/s (1864kB/s)(17.8MiB/10020msec) 00:32:16.957 slat (usec): min=7, max=116, avg=30.07, stdev=19.45 00:32:16.957 clat (usec): min=14130, max=44476, avg=34910.73, stdev=3945.28 00:32:16.957 lat (usec): min=14186, max=44497, avg=34940.81, stdev=3939.05 00:32:16.957 clat percentiles (usec): 00:32:16.957 | 1.00th=[21627], 5.00th=[32637], 10.00th=[33162], 20.00th=[33162], 00:32:16.957 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:16.957 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:32:16.957 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:32:16.957 | 99.99th=[44303] 00:32:16.957 bw ( KiB/s): min= 1408, max= 1923, per=4.18%, avg=1817.75, stdev=169.30, samples=20 00:32:16.957 iops : min= 352, max= 480, avg=454.40, stdev=42.30, samples=20 00:32:16.957 lat (msec) : 20=0.35%, 50=99.65% 00:32:16.957 cpu : usr=98.05%, sys=1.52%, ctx=19, majf=0, minf=18 00:32:16.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.957 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.957 filename1: (groupid=0, jobs=1): err= 0: pid=1200899: Fri Nov 15 12:53:55 2024 00:32:16.957 read: IOPS=452, BW=1810KiB/s (1853kB/s)(17.7MiB/10008msec) 00:32:16.957 slat (nsec): min=8619, max=78610, avg=32212.93, stdev=11171.36 00:32:16.957 clat (usec): min=22278, max=55097, avg=35057.22, stdev=3797.08 00:32:16.957 lat (usec): min=22300, max=55131, avg=35089.44, stdev=3796.47 00:32:16.957 clat percentiles (usec): 00:32:16.957 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:16.957 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:16.957 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:32:16.957 | 99.00th=[44303], 99.50th=[44303], 99.90th=[54789], 99.95th=[55313], 00:32:16.957 | 99.99th=[55313] 00:32:16.957 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1804.80, stdev=170.72, samples=20 00:32:16.957 iops : min= 352, max= 480, avg=451.20, stdev=42.68, samples=20 00:32:16.957 lat (msec) : 50=99.65%, 100=0.35% 00:32:16.957 cpu : usr=98.36%, sys=1.23%, ctx=16, majf=0, minf=27 00:32:16.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:16.957 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.957 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.957 filename1: (groupid=0, jobs=1): err= 0: pid=1200900: Fri Nov 15 12:53:55 2024 00:32:16.957 read: IOPS=452, BW=1810KiB/s (1853kB/s)(17.7MiB/10009msec) 00:32:16.957 slat (nsec): min=10819, max=94036, avg=37158.67, stdev=11888.36 00:32:16.957 clat (usec): min=18886, max=60434, avg=35020.99, stdev=3947.86 00:32:16.957 lat (usec): min=18909, max=60474, avg=35058.15, stdev=3947.23 00:32:16.957 clat percentiles (usec): 00:32:16.957 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:16.957 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:16.957 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43254], 95.00th=[43779], 00:32:16.957 | 99.00th=[44303], 99.50th=[44303], 99.90th=[60556], 99.95th=[60556], 00:32:16.957 | 99.99th=[60556] 00:32:16.957 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1804.80, stdev=154.83, samples=20 00:32:16.957 iops : min= 352, max= 480, avg=451.20, stdev=38.71, samples=20 00:32:16.957 lat (msec) : 20=0.35%, 50=99.29%, 100=0.35% 00:32:16.957 cpu : usr=98.41%, sys=1.17%, ctx=13, majf=0, minf=21 00:32:16.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:16.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.957 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.957 filename1: (groupid=0, jobs=1): err= 0: pid=1200901: Fri Nov 15 12:53:55 2024 00:32:16.957 read: IOPS=452, BW=1810KiB/s (1853kB/s)(17.7MiB/10009msec) 00:32:16.957 slat (usec): min=10, max=108, avg=37.05, stdev=13.79 00:32:16.957 clat (usec): min=18769, max=60051, avg=34998.02, stdev=3933.14 00:32:16.957 lat (usec): min=18791, max=60091, avg=35035.07, stdev=3933.41 00:32:16.957 clat percentiles (usec): 00:32:16.957 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:16.957 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:16.957 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43254], 95.00th=[43779], 00:32:16.957 | 99.00th=[43779], 99.50th=[44303], 99.90th=[60031], 99.95th=[60031], 00:32:16.957 | 99.99th=[60031] 00:32:16.957 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1804.80, stdev=154.83, samples=20 00:32:16.957 iops : min= 352, max= 480, avg=451.20, stdev=38.71, samples=20 00:32:16.957 lat (msec) : 20=0.35%, 50=99.29%, 100=0.35% 00:32:16.957 cpu : usr=98.39%, sys=1.16%, ctx=14, majf=0, minf=24 00:32:16.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:16.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.957 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.957 filename2: (groupid=0, jobs=1): err= 0: pid=1200902: Fri Nov 15 12:53:55 2024 00:32:16.957 read: IOPS=453, BW=1813KiB/s (1856kB/s)(17.8MiB/10028msec) 00:32:16.957 slat (usec): min=5, max=128, avg=30.58, stdev= 9.13 00:32:16.957 clat (usec): min=19325, max=45623, avg=35035.08, 
stdev=3589.28 00:32:16.957 lat (usec): min=19335, max=45644, avg=35065.65, stdev=3589.47 00:32:16.957 clat percentiles (usec): 00:32:16.957 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:16.957 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:16.957 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:32:16.957 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:32:16.957 | 99.99th=[45876] 00:32:16.957 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1805.47, stdev=170.10, samples=19 00:32:16.957 iops : min= 352, max= 480, avg=451.37, stdev=42.53, samples=19 00:32:16.957 lat (msec) : 20=0.04%, 50=99.96% 00:32:16.957 cpu : usr=96.88%, sys=1.95%, ctx=259, majf=0, minf=19 00:32:16.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.958 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.958 filename2: (groupid=0, jobs=1): err= 0: pid=1200903: Fri Nov 15 12:53:55 2024 00:32:16.958 read: IOPS=454, BW=1819KiB/s (1863kB/s)(17.8MiB/10027msec) 00:32:16.958 slat (usec): min=8, max=118, avg=33.59, stdev=18.63 00:32:16.958 clat (usec): min=15792, max=44617, avg=34900.85, stdev=3834.76 00:32:16.958 lat (usec): min=15820, max=44678, avg=34934.44, stdev=3832.12 00:32:16.958 clat percentiles (usec): 00:32:16.958 | 1.00th=[23987], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:32:16.958 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:16.958 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:32:16.958 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:32:16.958 | 99.99th=[44827] 00:32:16.958 bw ( KiB/s): min= 1408, max= 1923, per=4.18%, avg=1817.75, stdev=169.30, samples=20 00:32:16.958 iops : min= 352, max= 480, avg=454.40, stdev=42.30, samples=20 00:32:16.958 lat (msec) : 20=0.35%, 50=99.65% 00:32:16.958 cpu : usr=98.31%, sys=1.23%, ctx=26, majf=0, minf=43 00:32:16.958 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.958 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.958 filename2: (groupid=0, jobs=1): err= 0: pid=1200904: Fri Nov 15 12:53:55 2024 00:32:16.958 read: IOPS=452, BW=1809KiB/s (1853kB/s)(17.7MiB/10010msec) 00:32:16.958 slat (nsec): min=11040, max=85571, avg=36114.52, stdev=11084.01 00:32:16.958 clat (usec): min=18873, max=60826, avg=35036.26, stdev=3950.48 00:32:16.958 lat (usec): min=18901, max=60870, avg=35072.38, stdev=3950.25 00:32:16.958 clat percentiles (usec): 00:32:16.958 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:16.958 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:16.958 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43254], 95.00th=[43779], 00:32:16.958 | 99.00th=[44303], 99.50th=[44303], 99.90th=[60556], 99.95th=[60556], 00:32:16.958 | 99.99th=[61080] 00:32:16.958 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1804.80, stdev=154.83, samples=20 00:32:16.958 iops : 
min= 352, max= 480, avg=451.20, stdev=38.71, samples=20 00:32:16.958 lat (msec) : 20=0.35%, 50=99.29%, 100=0.35% 00:32:16.958 cpu : usr=97.65%, sys=1.70%, ctx=75, majf=0, minf=23 00:32:16.958 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:16.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.958 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.958 filename2: (groupid=0, jobs=1): err= 0: pid=1200905: Fri Nov 15 12:53:55 2024 00:32:16.958 read: IOPS=454, BW=1819KiB/s (1862kB/s)(17.8MiB/10029msec) 00:32:16.958 slat (nsec): min=12699, max=74086, avg=35839.63, stdev=9676.45 00:32:16.958 clat (usec): min=15962, max=44421, avg=34872.21, stdev=3786.48 00:32:16.958 lat (usec): min=15985, max=44444, avg=34908.05, stdev=3786.20 00:32:16.958 clat percentiles (usec): 00:32:16.958 | 1.00th=[25035], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:16.958 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:16.958 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43254], 95.00th=[43779], 00:32:16.958 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:32:16.958 | 99.99th=[44303] 00:32:16.958 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1817.60, stdev=169.20, samples=20 00:32:16.958 iops : min= 352, max= 480, avg=454.40, stdev=42.30, samples=20 00:32:16.958 lat (msec) : 20=0.35%, 50=99.65% 00:32:16.958 cpu : usr=96.84%, sys=2.06%, ctx=184, majf=0, minf=35 00:32:16.958 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.958 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.958 filename2: (groupid=0, jobs=1): err= 0: pid=1200906: Fri Nov 15 12:53:55 2024 00:32:16.958 read: IOPS=455, BW=1823KiB/s (1867kB/s)(17.8MiB/10005msec) 00:32:16.958 slat (usec): min=8, max=109, avg=18.49, stdev=11.78 00:32:16.958 clat (usec): min=13551, max=44470, avg=34951.60, stdev=4013.18 00:32:16.958 lat (usec): min=13586, max=44490, avg=34970.08, stdev=4010.52 00:32:16.958 clat percentiles (usec): 00:32:16.958 | 1.00th=[21365], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:32:16.958 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:16.958 | 70.00th=[34341], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:32:16.958 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:32:16.958 | 99.99th=[44303] 00:32:16.958 bw ( KiB/s): min= 1408, max= 1923, per=4.18%, avg=1817.75, stdev=169.30, samples=20 00:32:16.958 iops : min= 352, max= 480, avg=454.40, stdev=42.30, samples=20 00:32:16.958 lat (msec) : 20=0.70%, 50=99.30% 00:32:16.958 cpu : usr=98.26%, sys=1.33%, ctx=12, majf=0, minf=23 00:32:16.958 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.958 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.958 filename2: 
(groupid=0, jobs=1): err= 0: pid=1200907: Fri Nov 15 12:53:55 2024 00:32:16.958 read: IOPS=452, BW=1810KiB/s (1853kB/s)(17.7MiB/10008msec) 00:32:16.958 slat (nsec): min=8191, max=73708, avg=30080.94, stdev=10988.21 00:32:16.958 clat (usec): min=22259, max=54903, avg=35104.17, stdev=3793.53 00:32:16.958 lat (usec): min=22296, max=54932, avg=35134.25, stdev=3793.63 00:32:16.958 clat percentiles (usec): 00:32:16.958 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:16.958 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:16.958 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:32:16.958 | 99.00th=[44303], 99.50th=[44303], 99.90th=[54789], 99.95th=[54789], 00:32:16.958 | 99.99th=[54789] 00:32:16.958 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1804.80, stdev=170.72, samples=20 00:32:16.958 iops : min= 352, max= 480, avg=451.20, stdev=42.68, samples=20 00:32:16.958 lat (msec) : 50=99.65%, 100=0.35% 00:32:16.958 cpu : usr=98.36%, sys=1.22%, ctx=14, majf=0, minf=27 00:32:16.958 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.958 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.958 filename2: (groupid=0, jobs=1): err= 0: pid=1200908: Fri Nov 15 12:53:55 2024 00:32:16.958 read: IOPS=452, BW=1812KiB/s (1855kB/s)(17.8MiB/10031msec) 00:32:16.958 slat (usec): min=8, max=165, avg=29.47, stdev= 9.33 00:32:16.958 clat (usec): min=25653, max=44438, avg=35067.18, stdev=3598.81 00:32:16.958 lat (usec): min=25673, max=44463, avg=35096.64, stdev=3598.19 00:32:16.958 clat percentiles (usec): 00:32:16.958 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:16.958 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:16.958 | 70.00th=[34341], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:32:16.958 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:32:16.958 | 99.99th=[44303] 00:32:16.958 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1811.20, stdev=151.31, samples=20 00:32:16.958 iops : min= 352, max= 480, avg=452.80, stdev=37.83, samples=20 00:32:16.958 lat (msec) : 50=100.00% 00:32:16.958 cpu : usr=97.68%, sys=1.62%, ctx=141, majf=0, minf=29 00:32:16.958 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:16.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.958 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.958 filename2: (groupid=0, jobs=1): err= 0: pid=1200909: Fri Nov 15 12:53:55 2024 00:32:16.958 read: IOPS=452, BW=1809KiB/s (1853kB/s)(17.7MiB/10010msec) 00:32:16.958 slat (usec): min=8, max=104, avg=36.09, stdev=16.05 00:32:16.958 clat (usec): min=11539, max=86638, avg=35034.77, stdev=4464.98 00:32:16.958 lat (usec): min=11548, max=86656, avg=35070.86, stdev=4462.76 00:32:16.958 clat percentiles (usec): 00:32:16.958 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:16.958 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:16.958 | 70.00th=[33817], 80.00th=[34341], 
90.00th=[43779], 95.00th=[43779], 00:32:16.958 | 99.00th=[44303], 99.50th=[44303], 99.90th=[71828], 99.95th=[71828], 00:32:16.958 | 99.99th=[86508] 00:32:16.958 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1804.80, stdev=170.72, samples=20 00:32:16.958 iops : min= 352, max= 480, avg=451.20, stdev=42.68, samples=20 00:32:16.958 lat (msec) : 20=0.40%, 50=99.25%, 100=0.35% 00:32:16.958 cpu : usr=98.05%, sys=1.49%, ctx=16, majf=0, minf=34 00:32:16.958 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.958 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.958 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.958 00:32:16.958 Run status group 0 (all jobs): 00:32:16.959 READ: bw=42.5MiB/s (44.5MB/s), 1808KiB/s-1823KiB/s (1851kB/s-1867kB/s), io=426MiB (447MB), run=10005-10031msec 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub 
in "$@" 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.959 bdev_null0 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.959 [2024-11-15 12:53:55.898227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.959 bdev_null1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:16.959 12:53:55 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:16.959 { 00:32:16.959 "params": { 00:32:16.959 "name": "Nvme$subsystem", 00:32:16.959 "trtype": "$TEST_TRANSPORT", 00:32:16.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:16.959 "adrfam": "ipv4", 00:32:16.959 "trsvcid": "$NVMF_PORT", 00:32:16.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:16.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:16.959 "hdgst": ${hdgst:-false}, 00:32:16.959 "ddgst": ${ddgst:-false} 00:32:16.959 }, 00:32:16.959 "method": "bdev_nvme_attach_controller" 00:32:16.959 } 00:32:16.959 EOF 00:32:16.959 )") 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:16.959 12:53:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:16.960 { 00:32:16.960 "params": { 00:32:16.960 "name": "Nvme$subsystem", 00:32:16.960 "trtype": "$TEST_TRANSPORT", 00:32:16.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:16.960 "adrfam": "ipv4", 00:32:16.960 "trsvcid": "$NVMF_PORT", 00:32:16.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:16.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:16.960 "hdgst": ${hdgst:-false}, 00:32:16.960 "ddgst": ${ddgst:-false} 00:32:16.960 }, 00:32:16.960 "method": "bdev_nvme_attach_controller" 00:32:16.960 } 00:32:16.960 EOF 00:32:16.960 )") 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:16.960 "params": { 00:32:16.960 "name": "Nvme0", 00:32:16.960 "trtype": "tcp", 00:32:16.960 "traddr": "10.0.0.2", 00:32:16.960 "adrfam": "ipv4", 00:32:16.960 "trsvcid": "4420", 00:32:16.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:16.960 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:16.960 "hdgst": false, 00:32:16.960 "ddgst": false 00:32:16.960 }, 00:32:16.960 "method": "bdev_nvme_attach_controller" 00:32:16.960 },{ 00:32:16.960 "params": { 00:32:16.960 "name": "Nvme1", 00:32:16.960 "trtype": "tcp", 00:32:16.960 "traddr": "10.0.0.2", 00:32:16.960 "adrfam": "ipv4", 00:32:16.960 "trsvcid": "4420", 00:32:16.960 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:16.960 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:16.960 "hdgst": false, 00:32:16.960 "ddgst": false 00:32:16.960 }, 00:32:16.960 "method": "bdev_nvme_attach_controller" 00:32:16.960 }' 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:16.960 12:53:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:16.960 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:16.960 ... 00:32:16.960 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:16.960 ... 
00:32:16.960 fio-3.35 00:32:16.960 Starting 4 threads 00:32:22.226 00:32:22.226 filename0: (groupid=0, jobs=1): err= 0: pid=1202270: Fri Nov 15 12:54:02 2024 00:32:22.226 read: IOPS=1861, BW=14.5MiB/s (15.2MB/s)(72.8MiB/5002msec) 00:32:22.226 slat (nsec): min=4888, max=73144, avg=20146.62, stdev=10873.23 00:32:22.226 clat (usec): min=849, max=7686, avg=4220.94, stdev=584.11 00:32:22.226 lat (usec): min=861, max=7706, avg=4241.09, stdev=584.14 00:32:22.226 clat percentiles (usec): 00:32:22.226 | 1.00th=[ 2343], 5.00th=[ 3458], 10.00th=[ 3785], 20.00th=[ 4015], 00:32:22.226 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:32:22.226 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5080], 00:32:22.226 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7504], 00:32:22.226 | 99.99th=[ 7701] 00:32:22.226 bw ( KiB/s): min=14736, max=15088, per=25.02%, avg=14881.78, stdev=114.39, samples=9 00:32:22.226 iops : min= 1842, max= 1886, avg=1860.22, stdev=14.30, samples=9 00:32:22.226 lat (usec) : 1000=0.05% 00:32:22.226 lat (msec) : 2=0.70%, 4=18.85%, 10=80.40% 00:32:22.226 cpu : usr=95.34%, sys=4.06%, ctx=39, majf=0, minf=37 00:32:22.226 IO depths : 1=0.3%, 2=17.0%, 4=56.2%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:22.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.226 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.226 issued rwts: total=9312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.226 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:22.226 filename0: (groupid=0, jobs=1): err= 0: pid=1202272: Fri Nov 15 12:54:02 2024 00:32:22.226 read: IOPS=1882, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5006msec) 00:32:22.226 slat (usec): min=3, max=115, avg=21.18, stdev= 9.35 00:32:22.226 clat (usec): min=836, max=10421, avg=4173.99, stdev=533.57 00:32:22.226 lat (usec): min=856, max=10442, avg=4195.16, stdev=533.90 00:32:22.226 clat percentiles (usec): 00:32:22.226 | 1.00th=[ 2704], 5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 3949], 00:32:22.226 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:32:22.226 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4752], 00:32:22.226 | 99.00th=[ 6194], 99.50th=[ 6849], 99.90th=[ 7570], 99.95th=[10159], 00:32:22.226 | 99.99th=[10421] 00:32:22.226 bw ( KiB/s): min=14848, max=15696, per=25.34%, avg=15067.20, stdev=269.96, samples=10 00:32:22.226 iops : min= 1856, max= 1962, avg=1883.40, stdev=33.74, samples=10 00:32:22.226 lat (usec) : 1000=0.03% 00:32:22.226 lat (msec) : 2=0.27%, 4=22.97%, 10=76.65%, 20=0.08% 00:32:22.226 cpu : usr=95.96%, sys=3.30%, ctx=40, majf=0, minf=60 00:32:22.226 IO depths : 1=0.6%, 2=15.7%, 4=57.3%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:22.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.226 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.226 issued rwts: total=9425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.226 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:22.226 filename1: (groupid=0, jobs=1): err= 0: pid=1202273: Fri Nov 15 12:54:02 2024 00:32:22.226 read: IOPS=1865, BW=14.6MiB/s (15.3MB/s)(72.9MiB/5002msec) 00:32:22.226 slat (nsec): min=4897, max=70742, avg=19481.21, stdev=10669.16 00:32:22.226 clat (usec): min=876, max=7743, avg=4218.83, stdev=601.94 00:32:22.226 lat (usec): min=889, max=7756, avg=4238.31, stdev=602.03 00:32:22.226 clat percentiles (usec): 00:32:22.226 | 1.00th=[ 2311], 
5.00th=[ 3425], 10.00th=[ 3752], 20.00th=[ 3982], 00:32:22.226 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:32:22.226 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 5080], 00:32:22.226 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 7504], 99.95th=[ 7635], 00:32:22.226 | 99.99th=[ 7767] 00:32:22.226 bw ( KiB/s): min=14669, max=15360, per=25.08%, avg=14916.50, stdev=193.61, samples=10 00:32:22.226 iops : min= 1833, max= 1920, avg=1864.50, stdev=24.29, samples=10 00:32:22.226 lat (usec) : 1000=0.09% 00:32:22.226 lat (msec) : 2=0.63%, 4=19.76%, 10=79.53% 00:32:22.226 cpu : usr=94.94%, sys=4.60%, ctx=8, majf=0, minf=48 00:32:22.226 IO depths : 1=0.1%, 2=15.4%, 4=57.7%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:22.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.226 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.226 issued rwts: total=9329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.226 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:22.226 filename1: (groupid=0, jobs=1): err= 0: pid=1202274: Fri Nov 15 12:54:02 2024 00:32:22.226 read: IOPS=1867, BW=14.6MiB/s (15.3MB/s)(73.5MiB/5042msec) 00:32:22.226 slat (nsec): min=4890, max=65572, avg=17880.17, stdev=10259.38 00:32:22.226 clat (usec): min=923, max=42963, avg=4194.85, stdev=784.07 00:32:22.226 lat (usec): min=941, max=42980, avg=4212.73, stdev=784.39 00:32:22.226 clat percentiles (usec): 00:32:22.226 | 1.00th=[ 2278], 5.00th=[ 3425], 10.00th=[ 3752], 20.00th=[ 3982], 00:32:22.226 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:32:22.226 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4883], 00:32:22.226 | 99.00th=[ 6390], 99.50th=[ 6849], 99.90th=[ 7439], 99.95th=[ 7898], 00:32:22.226 | 99.99th=[42730] 00:32:22.226 bw ( KiB/s): min=14784, max=15600, per=25.32%, avg=15059.20, stdev=236.33, samples=10 00:32:22.226 iops : min= 1848, max= 1950, avg=1882.40, stdev=29.54, samples=10 00:32:22.226 lat (usec) : 1000=0.06% 00:32:22.226 lat (msec) : 2=0.66%, 4=20.45%, 10=78.81%, 50=0.02% 00:32:22.226 cpu : usr=95.52%, sys=4.01%, ctx=5, majf=0, minf=43 00:32:22.226 IO depths : 1=0.7%, 2=14.3%, 4=57.9%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:22.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.226 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.226 issued rwts: total=9414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.226 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:22.226 00:32:22.226 Run status group 0 (all jobs): 00:32:22.226 READ: bw=58.1MiB/s (60.9MB/s), 14.5MiB/s-14.7MiB/s (15.2MB/s-15.4MB/s), io=293MiB (307MB), run=5002-5042msec 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.226 00:32:22.226 real 0m24.453s 00:32:22.226 user 4m33.780s 00:32:22.226 sys 0m6.316s 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:22.226 12:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.226 ************************************ 00:32:22.226 END TEST fio_dif_rand_params 00:32:22.226 ************************************ 00:32:22.226 12:54:02 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:22.226 12:54:02 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:22.226 12:54:02 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:22.226 12:54:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:22.226 ************************************ 00:32:22.226 START TEST fio_dif_digest 00:32:22.226 ************************************ 00:32:22.226 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:32:22.226 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:22.226 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:22.226 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:22.226 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:22.227 bdev_null0 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:22.227 [2024-11-15 12:54:02.524877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 
-- # local fio_dir=/usr/src/fio 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:22.227 { 00:32:22.227 "params": { 00:32:22.227 "name": "Nvme$subsystem", 00:32:22.227 "trtype": "$TEST_TRANSPORT", 00:32:22.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:22.227 "adrfam": "ipv4", 00:32:22.227 "trsvcid": "$NVMF_PORT", 00:32:22.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:22.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:22.227 "hdgst": ${hdgst:-false}, 00:32:22.227 "ddgst": ${ddgst:-false} 00:32:22.227 }, 00:32:22.227 "method": "bdev_nvme_attach_controller" 00:32:22.227 } 00:32:22.227 EOF 00:32:22.227 )") 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:22.227 "params": { 00:32:22.227 "name": "Nvme0", 00:32:22.227 "trtype": "tcp", 00:32:22.227 "traddr": "10.0.0.2", 00:32:22.227 "adrfam": "ipv4", 00:32:22.227 "trsvcid": "4420", 00:32:22.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.227 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:22.227 "hdgst": true, 00:32:22.227 "ddgst": true 00:32:22.227 }, 00:32:22.227 "method": "bdev_nvme_attach_controller" 00:32:22.227 }' 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:22.227 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:22.485 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:22.485 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:22.485 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:22.485 12:54:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:22.485 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:22.485 ... 
00:32:22.485 fio-3.35 00:32:22.485 Starting 3 threads 00:32:34.679 00:32:34.679 filename0: (groupid=0, jobs=1): err= 0: pid=1203157: Fri Nov 15 12:54:13 2024 00:32:34.679 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(269MiB/10046msec) 00:32:34.679 slat (nsec): min=5663, max=69825, avg=19606.49, stdev=4830.35 00:32:34.679 clat (usec): min=10911, max=54686, avg=13988.74, stdev=2114.77 00:32:34.679 lat (usec): min=10931, max=54725, avg=14008.34, stdev=2114.90 00:32:34.679 clat percentiles (usec): 00:32:34.679 | 1.00th=[11600], 5.00th=[12387], 10.00th=[12649], 20.00th=[13042], 00:32:34.679 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:32:34.679 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15401], 00:32:34.679 | 99.00th=[16712], 99.50th=[17695], 99.90th=[54789], 99.95th=[54789], 00:32:34.679 | 99.99th=[54789] 00:32:34.679 bw ( KiB/s): min=24320, max=28416, per=34.68%, avg=27456.00, stdev=858.65, samples=20 00:32:34.679 iops : min= 190, max= 222, avg=214.50, stdev= 6.71, samples=20 00:32:34.679 lat (msec) : 20=99.77%, 50=0.05%, 100=0.19% 00:32:34.679 cpu : usr=95.05%, sys=4.44%, ctx=21, majf=0, minf=175 00:32:34.679 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:34.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.679 issued rwts: total=2148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:34.679 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:34.679 filename0: (groupid=0, jobs=1): err= 0: pid=1203158: Fri Nov 15 12:54:13 2024 00:32:34.679 read: IOPS=198, BW=24.8MiB/s (26.1MB/s)(250MiB/10042msec) 00:32:34.679 slat (nsec): min=5299, max=45817, avg=16365.98, stdev=4566.64 00:32:34.679 clat (usec): min=9697, max=54450, avg=15053.57, stdev=1533.43 00:32:34.679 lat (usec): min=9714, max=54463, avg=15069.94, stdev=1533.45 00:32:34.679 clat percentiles (usec): 00:32:34.679 | 1.00th=[12518], 5.00th=[13435], 10.00th=[13829], 20.00th=[14222], 00:32:34.679 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:32:34.679 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16319], 95.00th=[16712], 00:32:34.679 | 99.00th=[17695], 99.50th=[17957], 99.90th=[47449], 99.95th=[54264], 00:32:34.679 | 99.99th=[54264] 00:32:34.679 bw ( KiB/s): min=24832, max=26368, per=32.24%, avg=25525.70, stdev=369.45, samples=20 00:32:34.679 iops : min= 194, max= 206, avg=199.40, stdev= 2.91, samples=20 00:32:34.679 lat (msec) : 10=0.10%, 20=99.80%, 50=0.05%, 100=0.05% 00:32:34.679 cpu : usr=94.33%, sys=5.17%, ctx=19, majf=0, minf=165 00:32:34.679 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:34.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.679 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:34.679 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:34.679 filename0: (groupid=0, jobs=1): err= 0: pid=1203159: Fri Nov 15 12:54:13 2024 00:32:34.679 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(259MiB/10044msec) 00:32:34.679 slat (nsec): min=5179, max=42878, avg=16247.97, stdev=4474.83 00:32:34.679 clat (usec): min=8548, max=50876, avg=14523.13, stdev=1557.31 00:32:34.679 lat (usec): min=8565, max=50894, avg=14539.38, stdev=1557.38 00:32:34.679 clat percentiles (usec): 00:32:34.679 | 1.00th=[11863], 5.00th=[12780], 10.00th=[13304], 
20.00th=[13698], 00:32:34.679 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:32:34.679 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:32:34.679 | 99.00th=[16909], 99.50th=[17433], 99.90th=[20317], 99.95th=[51119], 00:32:34.679 | 99.99th=[51119] 00:32:34.679 bw ( KiB/s): min=25600, max=28103, per=33.42%, avg=26454.75, stdev=603.63, samples=20 00:32:34.679 iops : min= 200, max= 219, avg=206.65, stdev= 4.64, samples=20 00:32:34.679 lat (msec) : 10=0.48%, 20=99.32%, 50=0.10%, 100=0.10% 00:32:34.679 cpu : usr=94.75%, sys=4.75%, ctx=19, majf=0, minf=140 00:32:34.679 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:34.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.679 issued rwts: total=2069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:34.679 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:34.679 00:32:34.679 Run status group 0 (all jobs): 00:32:34.679 READ: bw=77.3MiB/s (81.1MB/s), 24.8MiB/s-26.7MiB/s (26.1MB/s-28.0MB/s), io=777MiB (814MB), run=10042-10046msec 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.679 00:32:34.679 real 0m11.242s 00:32:34.679 user 0m29.736s 00:32:34.679 sys 0m1.733s 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:34.679 12:54:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:34.679 ************************************ 00:32:34.679 END TEST fio_dif_digest 00:32:34.679 ************************************ 00:32:34.679 12:54:13 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:34.679 12:54:13 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:34.679 12:54:13 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:34.679 12:54:13 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:32:34.679 12:54:13 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:34.679 12:54:13 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:32:34.679 12:54:13 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:34.679 12:54:13 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:34.679 rmmod nvme_tcp 00:32:34.679 rmmod nvme_fabrics 00:32:34.679 rmmod nvme_keyring 00:32:34.679 
12:54:13 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:34.679 12:54:13 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:32:34.679 12:54:13 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:32:34.679 12:54:13 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1196992 ']' 00:32:34.679 12:54:13 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1196992 00:32:34.679 12:54:13 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1196992 ']' 00:32:34.679 12:54:13 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1196992 00:32:34.679 12:54:13 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:32:34.679 12:54:13 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:34.679 12:54:13 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196992 00:32:34.679 12:54:13 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:34.679 12:54:13 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:34.679 12:54:13 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196992' 00:32:34.679 killing process with pid 1196992 00:32:34.679 12:54:13 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1196992 00:32:34.679 12:54:13 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1196992 00:32:34.680 12:54:14 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:34.680 12:54:14 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:34.937 Waiting for block devices as requested 00:32:34.937 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:35.196 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:35.196 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:35.456 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:35.456 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:35.456 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:35.456 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:35.714 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:35.714 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:35.714 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:35.714 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:35.974 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:35.974 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:35.974 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:35.974 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:36.235 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:36.235 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:36.235 12:54:16 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:36.235 12:54:16 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:36.235 12:54:16 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:32:36.235 12:54:16 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:32:36.235 12:54:16 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:36.235 12:54:16 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:32:36.235 12:54:16 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:36.235 12:54:16 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:36.235 12:54:16 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.235 12:54:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:36.235 12:54:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.776 12:54:18 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush 
cvl_0_1 00:32:38.776 00:32:38.776 real 1m7.588s 00:32:38.776 user 6m32.422s 00:32:38.776 sys 0m17.066s 00:32:38.776 12:54:18 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:38.776 12:54:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:38.776 ************************************ 00:32:38.776 END TEST nvmf_dif 00:32:38.776 ************************************ 00:32:38.776 12:54:18 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:38.776 12:54:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:38.776 12:54:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:38.776 12:54:18 -- common/autotest_common.sh@10 -- # set +x 00:32:38.776 ************************************ 00:32:38.776 START TEST nvmf_abort_qd_sizes 00:32:38.777 ************************************ 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:38.777 * Looking for test storage... 00:32:38.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:38.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.777 --rc genhtml_branch_coverage=1 00:32:38.777 --rc genhtml_function_coverage=1 00:32:38.777 --rc genhtml_legend=1 00:32:38.777 --rc geninfo_all_blocks=1 00:32:38.777 --rc geninfo_unexecuted_blocks=1 00:32:38.777 00:32:38.777 ' 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:38.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.777 --rc genhtml_branch_coverage=1 00:32:38.777 --rc genhtml_function_coverage=1 00:32:38.777 --rc genhtml_legend=1 00:32:38.777 --rc geninfo_all_blocks=1 00:32:38.777 --rc geninfo_unexecuted_blocks=1 00:32:38.777 00:32:38.777 ' 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:38.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.777 --rc genhtml_branch_coverage=1 00:32:38.777 --rc genhtml_function_coverage=1 00:32:38.777 --rc genhtml_legend=1 00:32:38.777 --rc geninfo_all_blocks=1 00:32:38.777 --rc geninfo_unexecuted_blocks=1 00:32:38.777 00:32:38.777 ' 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:38.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.777 --rc genhtml_branch_coverage=1 00:32:38.777 --rc genhtml_function_coverage=1 00:32:38.777 --rc genhtml_legend=1 00:32:38.777 --rc geninfo_all_blocks=1 00:32:38.777 --rc geninfo_unexecuted_blocks=1 00:32:38.777 00:32:38.777 ' 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:38.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:32:38.777 12:54:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:40.680 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:40.681 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:40.681 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:40.681 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:40.681 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:40.681 12:54:20 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:40.681 12:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:40.681 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:40.681 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:40.957 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:40.957 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:40.957 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:40.957 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:40.957 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:40.957 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:40.957 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:40.957 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:40.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:40.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:32:40.957 00:32:40.957 --- 10.0.0.2 ping statistics --- 00:32:40.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.957 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:32:40.957 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:40.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:40.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:32:40.957 00:32:40.957 --- 10.0.0.1 ping statistics --- 00:32:40.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.957 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:32:40.957 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:40.957 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:32:40.957 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:40.957 12:54:21 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:41.928 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:41.928 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:41.928 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:42.188 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:42.188 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:42.188 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:42.188 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:42.188 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:42.188 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:42.188 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:42.188 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:42.188 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:42.188 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:42.188 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:42.188 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:42.188 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:43.125 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1208609 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1208609 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1208609 ']' 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
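The nvmf_tcp_init sequence traced above reduces to a short, reproducible recipe: the dual-port E810 NIC is detected at 0000:0a:00.0/0000:0a:00.1 (net devices cvl_0_0 and cvl_0_1), cvl_0_0 is pushed into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, an iptables rule opens TCP port 4420, and two pings verify the path. A minimal hand-run sketch of the same topology, using only the interface names and addresses recorded in this log and no SPDK helpers, would be:

  ip netns add cvl_0_0_ns_spdk                                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move one NIC port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator/root-namespace side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-facing interface, tagged so it can be scrubbed later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target ns -> root ns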
00:32:43.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.125 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:43.384 [2024-11-15 12:54:23.513588] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:32:43.384 [2024-11-15 12:54:23.513682] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.384 [2024-11-15 12:54:23.588690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:43.384 [2024-11-15 12:54:23.649002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.384 [2024-11-15 12:54:23.649077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.384 [2024-11-15 12:54:23.649091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.384 [2024-11-15 12:54:23.649110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.384 [2024-11-15 12:54:23.649119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:43.384 [2024-11-15 12:54:23.650533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.384 [2024-11-15 12:54:23.650592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:43.384 [2024-11-15 12:54:23.650659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:43.384 [2024-11-15 12:54:23.650663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:32:43.642 
12:54:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:43.642 12:54:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:43.642 ************************************ 00:32:43.642 START TEST spdk_target_abort 00:32:43.642 ************************************ 00:32:43.642 12:54:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:32:43.642 12:54:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:43.642 12:54:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:32:43.642 12:54:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.642 12:54:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:46.949 spdk_targetn1 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:46.949 [2024-11-15 12:54:26.678874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:46.949 [2024-11-15 12:54:26.723388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:46.949 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:46.950 12:54:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:50.230 Initializing NVMe Controllers 00:32:50.230 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:50.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:50.230 Initialization complete. Launching workers. 00:32:50.230 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12095, failed: 0 00:32:50.230 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1220, failed to submit 10875 00:32:50.230 success 767, unsuccessful 453, failed 0 00:32:50.230 12:54:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:50.230 12:54:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:53.508 Initializing NVMe Controllers 00:32:53.508 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:53.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:53.508 Initialization complete. Launching workers. 00:32:53.508 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8714, failed: 0 00:32:53.508 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1241, failed to submit 7473 00:32:53.508 success 305, unsuccessful 936, failed 0 00:32:53.508 12:54:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:53.508 12:54:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:56.787 Initializing NVMe Controllers 00:32:56.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:56.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:56.787 Initialization complete. Launching workers. 
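With the fabric up, spdk_target_abort builds its target entirely over RPC: the local NVMe drive at 0000:88:00.0 is attached as controller spdk_target (namespace bdev spdk_targetn1), a TCP transport and the subsystem nqn.2016-06.io.spdk:testnqn are created, the bdev is added as namespace 1, and a listener is opened on 10.0.0.2:4420. The abort example is then run three times with queue depths 4, 24 and 64, and the summary lines above count aborts that landed (success), raced with the I/O's completion (unsuccessful), or errored (failed). rpc_cmd in the trace is the test framework's wrapper; an equivalent sketch with direct scripts/rpc.py calls, where $SPDK_DIR is only a placeholder for the checkout and not a variable used by the test itself, would be:

  # nvmf_tgt itself was started inside the namespace (see the trace):
  #   ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
  $SPDK_DIR/scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # one of the three abort passes (the loop above repeats this with -q 24 and -q 64):
  $SPDK_DIR/build/examples/abort -q 4 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'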
00:32:56.787 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31954, failed: 0 00:32:56.787 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2639, failed to submit 29315 00:32:56.787 success 529, unsuccessful 2110, failed 0 00:32:56.787 12:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:56.787 12:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.787 12:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:56.787 12:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.787 12:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:56.787 12:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.787 12:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:57.721 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.721 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1208609 00:32:57.721 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1208609 ']' 00:32:57.721 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1208609 00:32:57.721 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:32:57.721 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:57.721 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1208609 00:32:57.721 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:57.721 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:57.721 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1208609' 00:32:57.721 killing process with pid 1208609 00:32:57.721 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1208609 00:32:57.721 12:54:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1208609 00:32:57.980 00:32:57.980 real 0m14.288s 00:32:57.980 user 0m53.836s 00:32:57.980 sys 0m2.818s 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:57.980 ************************************ 00:32:57.980 END TEST spdk_target_abort 00:32:57.980 ************************************ 00:32:57.980 12:54:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:57.980 12:54:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:57.980 12:54:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:57.980 12:54:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:57.980 ************************************ 00:32:57.980 START TEST kernel_target_abort 00:32:57.980 
************************************ 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:57.980 12:54:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:59.358 Waiting for block devices as requested 00:32:59.358 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:59.358 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:59.358 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:59.616 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:59.616 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:59.616 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:59.874 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:59.874 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:59.874 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:59.874 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:00.132 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:00.132 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:00.132 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:00.132 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:00.390 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:00.390 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:00.390 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:00.649 No valid GPT data, bailing 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:00.649 12:54:40 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:00.649 00:33:00.649 Discovery Log Number of Records 2, Generation counter 2 00:33:00.649 =====Discovery Log Entry 0====== 00:33:00.649 trtype: tcp 00:33:00.649 adrfam: ipv4 00:33:00.649 subtype: current discovery subsystem 00:33:00.649 treq: not specified, sq flow control disable supported 00:33:00.649 portid: 1 00:33:00.649 trsvcid: 4420 00:33:00.649 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:00.649 traddr: 10.0.0.1 00:33:00.649 eflags: none 00:33:00.649 sectype: none 00:33:00.649 =====Discovery Log Entry 1====== 00:33:00.649 trtype: tcp 00:33:00.649 adrfam: ipv4 00:33:00.649 subtype: nvme subsystem 00:33:00.649 treq: not specified, sq flow control disable supported 00:33:00.649 portid: 1 00:33:00.649 trsvcid: 4420 00:33:00.649 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:00.649 traddr: 10.0.0.1 00:33:00.649 eflags: none 00:33:00.649 sectype: none 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:00.649 12:54:40 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:00.649 12:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:03.928 Initializing NVMe Controllers 00:33:03.928 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:03.928 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:03.928 Initialization complete. Launching workers. 00:33:03.928 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57010, failed: 0 00:33:03.928 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 57010, failed to submit 0 00:33:03.928 success 0, unsuccessful 57010, failed 0 00:33:03.928 12:54:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:03.929 12:54:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:07.208 Initializing NVMe Controllers 00:33:07.208 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:07.208 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:07.208 Initialization complete. Launching workers. 
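kernel_target_abort repeats the same abort workload, but against the in-kernel nvmet target rather than the SPDK one: get_main_ns_ip resolves to 10.0.0.1 (the root-namespace address on cvl_0_1), the spdk-gpt.py probe above confirms /dev/nvme0n1 has no valid GPT and is therefore not in use, and configure_kernel_target wires everything up through configfs before nvme discover verifies both the discovery subsystem and testnqn. Bash xtrace does not print redirection targets, so the destinations of the echo calls are not visible in the trace; the sketch below is a hedged reconstruction using the standard nvmet configfs attribute names, with the device path and address taken from the log:

  modprobe nvmet
  modprobe nvmet-tcp
  cfg=/sys/kernel/config/nvmet
  mkdir $cfg/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir $cfg/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir $cfg/ports/1
  echo 1 > $cfg/subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  echo /dev/nvme0n1 > $cfg/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1 > $cfg/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  echo 10.0.0.1 > $cfg/ports/1/addr_traddr
  echo tcp      > $cfg/ports/1/addr_trtype
  echo 4420     > $cfg/ports/1/addr_trsvcid
  echo ipv4     > $cfg/ports/1/addr_adrfam
  ln -s $cfg/subsystems/nqn.2016-06.io.spdk:testnqn $cfg/ports/1/subsystems/
  nvme discover -t tcp -a 10.0.0.1 -s 4420    # should list the two discovery log entries shown above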
00:33:07.208 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100202, failed: 0 00:33:07.208 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25242, failed to submit 74960 00:33:07.208 success 0, unsuccessful 25242, failed 0 00:33:07.208 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:07.208 12:54:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:10.491 Initializing NVMe Controllers 00:33:10.491 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:10.491 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:10.491 Initialization complete. Launching workers. 00:33:10.491 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96150, failed: 0 00:33:10.491 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24046, failed to submit 72104 00:33:10.491 success 0, unsuccessful 24046, failed 0 00:33:10.491 12:54:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:10.491 12:54:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:10.491 12:54:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:33:10.491 12:54:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:10.491 12:54:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:10.491 12:54:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:10.491 12:54:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:10.491 12:54:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:10.491 12:54:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:10.491 12:54:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:11.428 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:11.428 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:11.428 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:11.428 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:11.428 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:11.428 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:11.428 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:11.428 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:11.428 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:11.428 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:11.428 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:11.428 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:11.428 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:11.428 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:11.428 0000:80:04.1 (8086 0e21): ioatdma 
-> vfio-pci 00:33:11.428 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:12.368 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:12.368 00:33:12.368 real 0m14.438s 00:33:12.368 user 0m6.726s 00:33:12.368 sys 0m3.217s 00:33:12.368 12:54:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:12.368 12:54:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:12.368 ************************************ 00:33:12.368 END TEST kernel_target_abort 00:33:12.368 ************************************ 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:12.368 rmmod nvme_tcp 00:33:12.368 rmmod nvme_fabrics 00:33:12.368 rmmod nvme_keyring 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1208609 ']' 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1208609 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1208609 ']' 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1208609 00:33:12.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1208609) - No such process 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1208609 is not found' 00:33:12.368 Process with pid 1208609 is not found 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:12.368 12:54:52 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:13.743 Waiting for block devices as requested 00:33:13.743 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:13.743 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:13.743 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:14.003 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:14.003 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:14.003 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:14.264 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:14.264 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:14.264 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:14.264 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:14.524 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:14.524 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:14.524 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:14.784 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:14.784 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:14.784 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:14.784 
0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:15.045 12:54:55 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:15.045 12:54:55 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:15.045 12:54:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:33:15.045 12:54:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:33:15.045 12:54:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:15.045 12:54:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:33:15.045 12:54:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:15.045 12:54:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:15.045 12:54:55 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.045 12:54:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:15.045 12:54:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.956 12:54:57 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:16.956 00:33:16.956 real 0m38.544s 00:33:16.956 user 1m2.859s 00:33:16.956 sys 0m9.621s 00:33:16.956 12:54:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:16.956 12:54:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:16.956 ************************************ 00:33:16.956 END TEST nvmf_abort_qd_sizes 00:33:16.956 ************************************ 00:33:16.956 12:54:57 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:16.956 12:54:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:16.956 12:54:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:16.956 12:54:57 -- common/autotest_common.sh@10 -- # set +x 00:33:16.956 ************************************ 00:33:16.956 START TEST keyring_file 00:33:16.956 ************************************ 00:33:16.956 12:54:57 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:16.956 * Looking for test storage... 
00:33:17.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:17.215 12:54:57 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:17.215 12:54:57 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:33:17.215 12:54:57 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:17.215 12:54:57 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@345 -- # : 1 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@353 -- # local d=1 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@355 -- # echo 1 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@353 -- # local d=2 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@355 -- # echo 2 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:17.215 12:54:57 keyring_file -- scripts/common.sh@368 -- # return 0 00:33:17.215 12:54:57 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:17.215 12:54:57 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:17.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.215 --rc genhtml_branch_coverage=1 00:33:17.215 --rc genhtml_function_coverage=1 00:33:17.215 --rc genhtml_legend=1 00:33:17.215 --rc geninfo_all_blocks=1 00:33:17.215 --rc geninfo_unexecuted_blocks=1 00:33:17.215 00:33:17.215 ' 00:33:17.215 12:54:57 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:17.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.215 --rc genhtml_branch_coverage=1 00:33:17.215 --rc genhtml_function_coverage=1 00:33:17.215 --rc genhtml_legend=1 00:33:17.215 --rc geninfo_all_blocks=1 
00:33:17.215 --rc geninfo_unexecuted_blocks=1 00:33:17.215 00:33:17.215 ' 00:33:17.215 12:54:57 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:17.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.215 --rc genhtml_branch_coverage=1 00:33:17.215 --rc genhtml_function_coverage=1 00:33:17.215 --rc genhtml_legend=1 00:33:17.215 --rc geninfo_all_blocks=1 00:33:17.215 --rc geninfo_unexecuted_blocks=1 00:33:17.215 00:33:17.215 ' 00:33:17.215 12:54:57 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:17.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.215 --rc genhtml_branch_coverage=1 00:33:17.215 --rc genhtml_function_coverage=1 00:33:17.215 --rc genhtml_legend=1 00:33:17.215 --rc geninfo_all_blocks=1 00:33:17.215 --rc geninfo_unexecuted_blocks=1 00:33:17.215 00:33:17.215 ' 00:33:17.215 12:54:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:17.215 12:54:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.215 12:54:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.216 12:54:57 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.216 12:54:57 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.216 12:54:57 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.216 12:54:57 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.216 12:54:57 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.216 12:54:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.216 12:54:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.216 12:54:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:17.216 12:54:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@51 -- # : 0 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:17.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:17.216 12:54:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:17.216 12:54:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:17.216 12:54:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:17.216 12:54:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:17.216 12:54:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:17.216 12:54:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.u6rxi6fZSm 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.u6rxi6fZSm 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.u6rxi6fZSm 00:33:17.216 12:54:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.u6rxi6fZSm 00:33:17.216 12:54:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.crdo7BGxvF 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:17.216 12:54:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.crdo7BGxvF 00:33:17.216 12:54:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.crdo7BGxvF 00:33:17.216 12:54:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.crdo7BGxvF 00:33:17.216 12:54:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=1214377 00:33:17.216 12:54:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:17.216 12:54:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1214377 00:33:17.216 12:54:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1214377 ']' 00:33:17.216 12:54:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.216 12:54:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:17.216 12:54:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.216 12:54:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:17.216 12:54:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:17.216 [2024-11-15 12:54:57.537545] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:33:17.216 [2024-11-15 12:54:57.537619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214377 ] 00:33:17.474 [2024-11-15 12:54:57.601415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.474 [2024-11-15 12:54:57.657005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:17.734 12:54:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:17.734 [2024-11-15 12:54:57.905473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.734 null0 00:33:17.734 [2024-11-15 12:54:57.937536] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:17.734 [2024-11-15 12:54:57.938029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.734 12:54:57 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:17.734 [2024-11-15 12:54:57.961580] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:17.734 request: 00:33:17.734 { 00:33:17.734 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:17.734 "secure_channel": false, 00:33:17.734 "listen_address": { 00:33:17.734 "trtype": "tcp", 00:33:17.734 "traddr": "127.0.0.1", 00:33:17.734 "trsvcid": "4420" 00:33:17.734 }, 00:33:17.734 "method": "nvmf_subsystem_add_listener", 00:33:17.734 "req_id": 1 00:33:17.734 } 00:33:17.734 Got JSON-RPC error response 00:33:17.734 response: 00:33:17.734 { 00:33:17.734 
"code": -32602, 00:33:17.734 "message": "Invalid parameters" 00:33:17.734 } 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:17.734 12:54:57 keyring_file -- keyring/file.sh@47 -- # bperfpid=1214388 00:33:17.734 12:54:57 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:17.734 12:54:57 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1214388 /var/tmp/bperf.sock 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1214388 ']' 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:17.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:17.734 12:54:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:17.734 [2024-11-15 12:54:58.009759] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:33:17.734 [2024-11-15 12:54:58.009837] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214388 ] 00:33:17.734 [2024-11-15 12:54:58.073934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.993 [2024-11-15 12:54:58.131761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.993 12:54:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.993 12:54:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:17.993 12:54:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.u6rxi6fZSm 00:33:17.993 12:54:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u6rxi6fZSm 00:33:18.251 12:54:58 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.crdo7BGxvF 00:33:18.251 12:54:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.crdo7BGxvF 00:33:18.509 12:54:58 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:33:18.509 12:54:58 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:18.509 12:54:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.509 12:54:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.509 12:54:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:33:18.768 12:54:59 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.u6rxi6fZSm == \/\t\m\p\/\t\m\p\.\u\6\r\x\i\6\f\Z\S\m ]] 00:33:18.768 12:54:59 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:33:18.768 12:54:59 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:33:18.768 12:54:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.768 12:54:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.768 12:54:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:19.026 12:54:59 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.crdo7BGxvF == \/\t\m\p\/\t\m\p\.\c\r\d\o\7\B\G\x\v\F ]] 00:33:19.026 12:54:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:33:19.026 12:54:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:19.026 12:54:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:19.026 12:54:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:19.026 12:54:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.026 12:54:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:19.284 12:54:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:19.284 12:54:59 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:33:19.284 12:54:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:19.284 12:54:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:19.284 12:54:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:19.284 12:54:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.284 12:54:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:19.850 12:54:59 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:33:19.851 12:54:59 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:19.851 12:54:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:19.851 [2024-11-15 12:55:00.155216] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:20.109 nvme0n1 00:33:20.109 12:55:00 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:33:20.109 12:55:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:20.109 12:55:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:20.109 12:55:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:20.109 12:55:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:20.109 12:55:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:20.367 12:55:00 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:33:20.367 12:55:00 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:33:20.367 12:55:00 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:33:20.367 12:55:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:20.367 12:55:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:20.367 12:55:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:20.367 12:55:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:20.625 12:55:00 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:33:20.625 12:55:00 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:20.625 Running I/O for 1 seconds... 00:33:22.000 10065.00 IOPS, 39.32 MiB/s 00:33:22.000 Latency(us) 00:33:22.000 [2024-11-15T11:55:02.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.000 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:22.000 nvme0n1 : 1.01 10116.81 39.52 0.00 0.00 12612.53 5194.33 19320.98 00:33:22.000 [2024-11-15T11:55:02.344Z] =================================================================================================================== 00:33:22.000 [2024-11-15T11:55:02.344Z] Total : 10116.81 39.52 0.00 0.00 12612.53 5194.33 19320.98 00:33:22.000 { 00:33:22.000 "results": [ 00:33:22.000 { 00:33:22.000 "job": "nvme0n1", 00:33:22.000 "core_mask": "0x2", 00:33:22.000 "workload": "randrw", 00:33:22.000 "percentage": 50, 00:33:22.000 "status": "finished", 00:33:22.000 "queue_depth": 128, 00:33:22.000 "io_size": 4096, 00:33:22.000 "runtime": 1.007729, 00:33:22.000 "iops": 10116.80719717305, 00:33:22.000 "mibps": 39.518778113957225, 00:33:22.000 "io_failed": 0, 00:33:22.000 "io_timeout": 0, 00:33:22.000 "avg_latency_us": 12612.531943545311, 00:33:22.000 "min_latency_us": 5194.334814814815, 00:33:22.000 "max_latency_us": 19320.983703703703 00:33:22.000 } 00:33:22.000 ], 00:33:22.000 "core_count": 1 00:33:22.000 } 00:33:22.000 12:55:01 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:22.000 12:55:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:22.000 12:55:02 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:33:22.000 12:55:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:22.000 12:55:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:22.000 12:55:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:22.000 12:55:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:22.000 12:55:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:22.258 12:55:02 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:22.258 12:55:02 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:33:22.258 12:55:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:22.258 12:55:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:22.258 12:55:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:22.258 12:55:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:22.258 12:55:02 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:22.517 12:55:02 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:33:22.517 12:55:02 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:22.517 12:55:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:22.517 12:55:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:22.517 12:55:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:22.517 12:55:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:22.517 12:55:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:22.517 12:55:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:22.517 12:55:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:22.517 12:55:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:22.776 [2024-11-15 12:55:03.035745] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:22.776 [2024-11-15 12:55:03.035975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4510 (107): Transport endpoint is not connected 00:33:22.776 [2024-11-15 12:55:03.036967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec4510 (9): Bad file descriptor 00:33:22.776 [2024-11-15 12:55:03.037966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:22.776 [2024-11-15 12:55:03.037987] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:22.776 [2024-11-15 12:55:03.038028] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:22.776 [2024-11-15 12:55:03.038043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
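The errors above come from the deliberate mismatched-key attempt: the target listener was set up against key0, so attaching with --psk key1 is expected to fail, and the NOT wrapper treats the error as the passing outcome. A sketch of that expected-failure check, assuming rpc.py exits non-zero when the JSON-RPC response carries an error (flags and identifiers copied from the traced command):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Attaching with the wrong PSK must be rejected; success here is a test failure
if $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo "unexpected success: key1 should not match the listener's PSK" >&2
    exit 1
fi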
00:33:22.776 request: 00:33:22.776 { 00:33:22.776 "name": "nvme0", 00:33:22.776 "trtype": "tcp", 00:33:22.776 "traddr": "127.0.0.1", 00:33:22.776 "adrfam": "ipv4", 00:33:22.776 "trsvcid": "4420", 00:33:22.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:22.776 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:22.776 "prchk_reftag": false, 00:33:22.776 "prchk_guard": false, 00:33:22.776 "hdgst": false, 00:33:22.776 "ddgst": false, 00:33:22.776 "psk": "key1", 00:33:22.776 "allow_unrecognized_csi": false, 00:33:22.776 "method": "bdev_nvme_attach_controller", 00:33:22.776 "req_id": 1 00:33:22.776 } 00:33:22.776 Got JSON-RPC error response 00:33:22.776 response: 00:33:22.776 { 00:33:22.776 "code": -5, 00:33:22.776 "message": "Input/output error" 00:33:22.776 } 00:33:22.776 12:55:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:22.776 12:55:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:22.776 12:55:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:22.776 12:55:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:22.776 12:55:03 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:33:22.776 12:55:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:22.776 12:55:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:22.776 12:55:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:22.776 12:55:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:22.776 12:55:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:23.034 12:55:03 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:23.034 12:55:03 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:33:23.034 12:55:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:23.034 12:55:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:23.034 12:55:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:23.034 12:55:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:23.034 12:55:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.291 12:55:03 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:33:23.291 12:55:03 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:33:23.291 12:55:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:23.550 12:55:03 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:33:23.550 12:55:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:23.808 12:55:04 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:33:23.808 12:55:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.808 12:55:04 keyring_file -- keyring/file.sh@78 -- # jq length 00:33:24.375 12:55:04 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:33:24.375 12:55:04 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.u6rxi6fZSm 00:33:24.375 12:55:04 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.u6rxi6fZSm 00:33:24.375 12:55:04 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:24.375 12:55:04 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.u6rxi6fZSm 00:33:24.375 12:55:04 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:24.375 12:55:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:24.375 12:55:04 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:24.375 12:55:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:24.375 12:55:04 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.u6rxi6fZSm 00:33:24.375 12:55:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u6rxi6fZSm 00:33:24.375 [2024-11-15 12:55:04.656976] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.u6rxi6fZSm': 0100660 00:33:24.375 [2024-11-15 12:55:04.657009] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:24.375 request: 00:33:24.375 { 00:33:24.375 "name": "key0", 00:33:24.375 "path": "/tmp/tmp.u6rxi6fZSm", 00:33:24.375 "method": "keyring_file_add_key", 00:33:24.375 "req_id": 1 00:33:24.375 } 00:33:24.375 Got JSON-RPC error response 00:33:24.375 response: 00:33:24.375 { 00:33:24.375 "code": -1, 00:33:24.375 "message": "Operation not permitted" 00:33:24.375 } 00:33:24.375 12:55:04 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:24.375 12:55:04 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:24.375 12:55:04 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:24.375 12:55:04 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:24.375 12:55:04 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.u6rxi6fZSm 00:33:24.375 12:55:04 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.u6rxi6fZSm 00:33:24.375 12:55:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u6rxi6fZSm 00:33:24.633 12:55:04 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.u6rxi6fZSm 00:33:24.633 12:55:04 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:33:24.633 12:55:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:24.633 12:55:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:24.633 12:55:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:24.633 12:55:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:24.633 12:55:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:24.891 12:55:05 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:33:24.891 12:55:05 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:24.891 12:55:05 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:24.891 12:55:05 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:24.891 12:55:05 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:25.150 12:55:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:25.150 12:55:05 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:25.150 12:55:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:25.150 12:55:05 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:25.150 12:55:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:25.150 [2024-11-15 12:55:05.483285] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.u6rxi6fZSm': No such file or directory 00:33:25.150 [2024-11-15 12:55:05.483327] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:25.150 [2024-11-15 12:55:05.483350] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:25.150 [2024-11-15 12:55:05.483362] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:33:25.150 [2024-11-15 12:55:05.483375] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:25.150 [2024-11-15 12:55:05.483386] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:25.150 request: 00:33:25.150 { 00:33:25.150 "name": "nvme0", 00:33:25.150 "trtype": "tcp", 00:33:25.150 "traddr": "127.0.0.1", 00:33:25.150 "adrfam": "ipv4", 00:33:25.150 "trsvcid": "4420", 00:33:25.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:25.150 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:25.150 "prchk_reftag": false, 00:33:25.150 "prchk_guard": false, 00:33:25.150 "hdgst": false, 00:33:25.150 "ddgst": false, 00:33:25.150 "psk": "key0", 00:33:25.150 "allow_unrecognized_csi": false, 00:33:25.150 "method": "bdev_nvme_attach_controller", 00:33:25.150 "req_id": 1 00:33:25.150 } 00:33:25.150 Got JSON-RPC error response 00:33:25.150 response: 00:33:25.150 { 00:33:25.150 "code": -19, 00:33:25.150 "message": "No such device" 00:33:25.150 } 00:33:25.408 12:55:05 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:25.408 12:55:05 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:25.408 12:55:05 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:25.408 12:55:05 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:25.408 12:55:05 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:33:25.408 12:55:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:25.666 12:55:05 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:25.666 12:55:05 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:33:25.666 12:55:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:25.666 12:55:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:25.666 12:55:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:25.666 12:55:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:25.666 12:55:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MF57S8NWR1 00:33:25.666 12:55:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:25.666 12:55:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:25.666 12:55:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:25.666 12:55:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:25.666 12:55:05 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:25.666 12:55:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:25.666 12:55:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:25.666 12:55:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MF57S8NWR1 00:33:25.666 12:55:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MF57S8NWR1 00:33:25.666 12:55:05 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.MF57S8NWR1 00:33:25.666 12:55:05 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MF57S8NWR1 00:33:25.666 12:55:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MF57S8NWR1 00:33:25.924 12:55:06 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:25.924 12:55:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:26.182 nvme0n1 00:33:26.182 12:55:06 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:33:26.182 12:55:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:26.182 12:55:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:26.182 12:55:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:26.182 12:55:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:26.182 12:55:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:26.438 12:55:06 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:33:26.439 12:55:06 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:33:26.439 12:55:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:26.695 12:55:06 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:33:26.695 12:55:06 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:33:26.695 12:55:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:26.695 12:55:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:26.695 12:55:06 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:26.953 12:55:07 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:33:26.953 12:55:07 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:33:26.953 12:55:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:26.953 12:55:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:26.953 12:55:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:26.953 12:55:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:26.953 12:55:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:27.211 12:55:07 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:33:27.211 12:55:07 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:27.211 12:55:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:27.468 12:55:07 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:33:27.468 12:55:07 keyring_file -- keyring/file.sh@105 -- # jq length 00:33:27.468 12:55:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:27.726 12:55:08 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:33:27.726 12:55:08 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MF57S8NWR1 00:33:27.726 12:55:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MF57S8NWR1 00:33:27.983 12:55:08 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.crdo7BGxvF 00:33:27.983 12:55:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.crdo7BGxvF 00:33:28.549 12:55:08 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:28.549 12:55:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:28.807 nvme0n1 00:33:28.807 12:55:08 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:33:28.807 12:55:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:29.065 12:55:09 keyring_file -- keyring/file.sh@113 -- # config='{ 00:33:29.065 "subsystems": [ 00:33:29.065 { 00:33:29.065 "subsystem": "keyring", 00:33:29.065 "config": [ 00:33:29.065 { 00:33:29.065 "method": "keyring_file_add_key", 00:33:29.065 "params": { 00:33:29.065 "name": "key0", 00:33:29.065 "path": "/tmp/tmp.MF57S8NWR1" 00:33:29.065 } 00:33:29.065 }, 00:33:29.065 { 00:33:29.065 "method": "keyring_file_add_key", 00:33:29.065 "params": { 00:33:29.065 "name": "key1", 00:33:29.065 "path": "/tmp/tmp.crdo7BGxvF" 00:33:29.065 } 00:33:29.065 } 00:33:29.065 ] 00:33:29.065 
}, 00:33:29.065 { 00:33:29.065 "subsystem": "iobuf", 00:33:29.065 "config": [ 00:33:29.065 { 00:33:29.065 "method": "iobuf_set_options", 00:33:29.065 "params": { 00:33:29.065 "small_pool_count": 8192, 00:33:29.065 "large_pool_count": 1024, 00:33:29.065 "small_bufsize": 8192, 00:33:29.065 "large_bufsize": 135168, 00:33:29.066 "enable_numa": false 00:33:29.066 } 00:33:29.066 } 00:33:29.066 ] 00:33:29.066 }, 00:33:29.066 { 00:33:29.066 "subsystem": "sock", 00:33:29.066 "config": [ 00:33:29.066 { 00:33:29.066 "method": "sock_set_default_impl", 00:33:29.066 "params": { 00:33:29.066 "impl_name": "posix" 00:33:29.066 } 00:33:29.066 }, 00:33:29.066 { 00:33:29.066 "method": "sock_impl_set_options", 00:33:29.066 "params": { 00:33:29.066 "impl_name": "ssl", 00:33:29.066 "recv_buf_size": 4096, 00:33:29.066 "send_buf_size": 4096, 00:33:29.066 "enable_recv_pipe": true, 00:33:29.066 "enable_quickack": false, 00:33:29.066 "enable_placement_id": 0, 00:33:29.066 "enable_zerocopy_send_server": true, 00:33:29.066 "enable_zerocopy_send_client": false, 00:33:29.066 "zerocopy_threshold": 0, 00:33:29.066 "tls_version": 0, 00:33:29.066 "enable_ktls": false 00:33:29.066 } 00:33:29.066 }, 00:33:29.066 { 00:33:29.066 "method": "sock_impl_set_options", 00:33:29.066 "params": { 00:33:29.066 "impl_name": "posix", 00:33:29.066 "recv_buf_size": 2097152, 00:33:29.066 "send_buf_size": 2097152, 00:33:29.066 "enable_recv_pipe": true, 00:33:29.066 "enable_quickack": false, 00:33:29.066 "enable_placement_id": 0, 00:33:29.066 "enable_zerocopy_send_server": true, 00:33:29.066 "enable_zerocopy_send_client": false, 00:33:29.066 "zerocopy_threshold": 0, 00:33:29.066 "tls_version": 0, 00:33:29.066 "enable_ktls": false 00:33:29.066 } 00:33:29.066 } 00:33:29.066 ] 00:33:29.066 }, 00:33:29.066 { 00:33:29.066 "subsystem": "vmd", 00:33:29.066 "config": [] 00:33:29.066 }, 00:33:29.066 { 00:33:29.066 "subsystem": "accel", 00:33:29.066 "config": [ 00:33:29.066 { 00:33:29.066 "method": "accel_set_options", 00:33:29.066 "params": { 00:33:29.066 "small_cache_size": 128, 00:33:29.066 "large_cache_size": 16, 00:33:29.066 "task_count": 2048, 00:33:29.066 "sequence_count": 2048, 00:33:29.066 "buf_count": 2048 00:33:29.066 } 00:33:29.066 } 00:33:29.066 ] 00:33:29.066 }, 00:33:29.066 { 00:33:29.066 "subsystem": "bdev", 00:33:29.066 "config": [ 00:33:29.066 { 00:33:29.066 "method": "bdev_set_options", 00:33:29.066 "params": { 00:33:29.066 "bdev_io_pool_size": 65535, 00:33:29.066 "bdev_io_cache_size": 256, 00:33:29.066 "bdev_auto_examine": true, 00:33:29.066 "iobuf_small_cache_size": 128, 00:33:29.066 "iobuf_large_cache_size": 16 00:33:29.066 } 00:33:29.066 }, 00:33:29.066 { 00:33:29.066 "method": "bdev_raid_set_options", 00:33:29.066 "params": { 00:33:29.066 "process_window_size_kb": 1024, 00:33:29.066 "process_max_bandwidth_mb_sec": 0 00:33:29.066 } 00:33:29.066 }, 00:33:29.066 { 00:33:29.066 "method": "bdev_iscsi_set_options", 00:33:29.066 "params": { 00:33:29.066 "timeout_sec": 30 00:33:29.066 } 00:33:29.066 }, 00:33:29.066 { 00:33:29.066 "method": "bdev_nvme_set_options", 00:33:29.066 "params": { 00:33:29.066 "action_on_timeout": "none", 00:33:29.066 "timeout_us": 0, 00:33:29.066 "timeout_admin_us": 0, 00:33:29.066 "keep_alive_timeout_ms": 10000, 00:33:29.066 "arbitration_burst": 0, 00:33:29.066 "low_priority_weight": 0, 00:33:29.066 "medium_priority_weight": 0, 00:33:29.066 "high_priority_weight": 0, 00:33:29.066 "nvme_adminq_poll_period_us": 10000, 00:33:29.066 "nvme_ioq_poll_period_us": 0, 00:33:29.066 "io_queue_requests": 512, 00:33:29.066 
"delay_cmd_submit": true, 00:33:29.066 "transport_retry_count": 4, 00:33:29.066 "bdev_retry_count": 3, 00:33:29.066 "transport_ack_timeout": 0, 00:33:29.066 "ctrlr_loss_timeout_sec": 0, 00:33:29.066 "reconnect_delay_sec": 0, 00:33:29.066 "fast_io_fail_timeout_sec": 0, 00:33:29.066 "disable_auto_failback": false, 00:33:29.066 "generate_uuids": false, 00:33:29.066 "transport_tos": 0, 00:33:29.066 "nvme_error_stat": false, 00:33:29.066 "rdma_srq_size": 0, 00:33:29.066 "io_path_stat": false, 00:33:29.066 "allow_accel_sequence": false, 00:33:29.066 "rdma_max_cq_size": 0, 00:33:29.066 "rdma_cm_event_timeout_ms": 0, 00:33:29.066 "dhchap_digests": [ 00:33:29.066 "sha256", 00:33:29.066 "sha384", 00:33:29.066 "sha512" 00:33:29.066 ], 00:33:29.066 "dhchap_dhgroups": [ 00:33:29.066 "null", 00:33:29.066 "ffdhe2048", 00:33:29.066 "ffdhe3072", 00:33:29.066 "ffdhe4096", 00:33:29.066 "ffdhe6144", 00:33:29.066 "ffdhe8192" 00:33:29.066 ] 00:33:29.066 } 00:33:29.066 }, 00:33:29.066 { 00:33:29.066 "method": "bdev_nvme_attach_controller", 00:33:29.066 "params": { 00:33:29.066 "name": "nvme0", 00:33:29.066 "trtype": "TCP", 00:33:29.066 "adrfam": "IPv4", 00:33:29.066 "traddr": "127.0.0.1", 00:33:29.066 "trsvcid": "4420", 00:33:29.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:29.066 "prchk_reftag": false, 00:33:29.066 "prchk_guard": false, 00:33:29.066 "ctrlr_loss_timeout_sec": 0, 00:33:29.066 "reconnect_delay_sec": 0, 00:33:29.066 "fast_io_fail_timeout_sec": 0, 00:33:29.066 "psk": "key0", 00:33:29.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:29.066 "hdgst": false, 00:33:29.066 "ddgst": false, 00:33:29.066 "multipath": "multipath" 00:33:29.066 } 00:33:29.066 }, 00:33:29.066 { 00:33:29.066 "method": "bdev_nvme_set_hotplug", 00:33:29.066 "params": { 00:33:29.066 "period_us": 100000, 00:33:29.066 "enable": false 00:33:29.066 } 00:33:29.066 }, 00:33:29.066 { 00:33:29.066 "method": "bdev_wait_for_examine" 00:33:29.066 } 00:33:29.066 ] 00:33:29.066 }, 00:33:29.066 { 00:33:29.066 "subsystem": "nbd", 00:33:29.066 "config": [] 00:33:29.066 } 00:33:29.066 ] 00:33:29.066 }' 00:33:29.066 12:55:09 keyring_file -- keyring/file.sh@115 -- # killprocess 1214388 00:33:29.066 12:55:09 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1214388 ']' 00:33:29.066 12:55:09 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1214388 00:33:29.066 12:55:09 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:29.066 12:55:09 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:29.066 12:55:09 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1214388 00:33:29.066 12:55:09 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:29.066 12:55:09 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:29.066 12:55:09 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1214388' 00:33:29.066 killing process with pid 1214388 00:33:29.066 12:55:09 keyring_file -- common/autotest_common.sh@973 -- # kill 1214388 00:33:29.066 Received shutdown signal, test time was about 1.000000 seconds 00:33:29.066 00:33:29.066 Latency(us) 00:33:29.066 [2024-11-15T11:55:09.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.066 [2024-11-15T11:55:09.410Z] =================================================================================================================== 00:33:29.066 [2024-11-15T11:55:09.410Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:29.066 12:55:09 
keyring_file -- common/autotest_common.sh@978 -- # wait 1214388 00:33:29.325 12:55:09 keyring_file -- keyring/file.sh@118 -- # bperfpid=1215852 00:33:29.325 12:55:09 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1215852 /var/tmp/bperf.sock 00:33:29.325 12:55:09 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1215852 ']' 00:33:29.325 12:55:09 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:29.325 12:55:09 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:29.325 12:55:09 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:29.325 12:55:09 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:29.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:29.325 12:55:09 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:29.325 12:55:09 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:33:29.325 "subsystems": [ 00:33:29.325 { 00:33:29.325 "subsystem": "keyring", 00:33:29.325 "config": [ 00:33:29.325 { 00:33:29.325 "method": "keyring_file_add_key", 00:33:29.325 "params": { 00:33:29.325 "name": "key0", 00:33:29.325 "path": "/tmp/tmp.MF57S8NWR1" 00:33:29.325 } 00:33:29.325 }, 00:33:29.325 { 00:33:29.325 "method": "keyring_file_add_key", 00:33:29.325 "params": { 00:33:29.325 "name": "key1", 00:33:29.325 "path": "/tmp/tmp.crdo7BGxvF" 00:33:29.325 } 00:33:29.325 } 00:33:29.325 ] 00:33:29.325 }, 00:33:29.325 { 00:33:29.325 "subsystem": "iobuf", 00:33:29.325 "config": [ 00:33:29.325 { 00:33:29.325 "method": "iobuf_set_options", 00:33:29.325 "params": { 00:33:29.325 "small_pool_count": 8192, 00:33:29.325 "large_pool_count": 1024, 00:33:29.325 "small_bufsize": 8192, 00:33:29.325 "large_bufsize": 135168, 00:33:29.325 "enable_numa": false 00:33:29.325 } 00:33:29.325 } 00:33:29.325 ] 00:33:29.325 }, 00:33:29.325 { 00:33:29.325 "subsystem": "sock", 00:33:29.325 "config": [ 00:33:29.325 { 00:33:29.325 "method": "sock_set_default_impl", 00:33:29.326 "params": { 00:33:29.326 "impl_name": "posix" 00:33:29.326 } 00:33:29.326 }, 00:33:29.326 { 00:33:29.326 "method": "sock_impl_set_options", 00:33:29.326 "params": { 00:33:29.326 "impl_name": "ssl", 00:33:29.326 "recv_buf_size": 4096, 00:33:29.326 "send_buf_size": 4096, 00:33:29.326 "enable_recv_pipe": true, 00:33:29.326 "enable_quickack": false, 00:33:29.326 "enable_placement_id": 0, 00:33:29.326 "enable_zerocopy_send_server": true, 00:33:29.326 "enable_zerocopy_send_client": false, 00:33:29.326 "zerocopy_threshold": 0, 00:33:29.326 "tls_version": 0, 00:33:29.326 "enable_ktls": false 00:33:29.326 } 00:33:29.326 }, 00:33:29.326 { 00:33:29.326 "method": "sock_impl_set_options", 00:33:29.326 "params": { 00:33:29.326 "impl_name": "posix", 00:33:29.326 "recv_buf_size": 2097152, 00:33:29.326 "send_buf_size": 2097152, 00:33:29.326 "enable_recv_pipe": true, 00:33:29.326 "enable_quickack": false, 00:33:29.326 "enable_placement_id": 0, 00:33:29.326 "enable_zerocopy_send_server": true, 00:33:29.326 "enable_zerocopy_send_client": false, 00:33:29.326 "zerocopy_threshold": 0, 00:33:29.326 "tls_version": 0, 00:33:29.326 "enable_ktls": false 00:33:29.326 } 00:33:29.326 } 00:33:29.326 ] 00:33:29.326 }, 00:33:29.326 { 00:33:29.326 "subsystem": "vmd", 00:33:29.326 "config": [] 00:33:29.326 }, 
00:33:29.326 { 00:33:29.326 "subsystem": "accel", 00:33:29.326 "config": [ 00:33:29.326 { 00:33:29.326 "method": "accel_set_options", 00:33:29.326 "params": { 00:33:29.326 "small_cache_size": 128, 00:33:29.326 "large_cache_size": 16, 00:33:29.326 "task_count": 2048, 00:33:29.326 "sequence_count": 2048, 00:33:29.326 "buf_count": 2048 00:33:29.326 } 00:33:29.326 } 00:33:29.326 ] 00:33:29.326 }, 00:33:29.326 { 00:33:29.326 "subsystem": "bdev", 00:33:29.326 "config": [ 00:33:29.326 { 00:33:29.326 "method": "bdev_set_options", 00:33:29.326 "params": { 00:33:29.326 "bdev_io_pool_size": 65535, 00:33:29.326 "bdev_io_cache_size": 256, 00:33:29.326 "bdev_auto_examine": true, 00:33:29.326 "iobuf_small_cache_size": 128, 00:33:29.326 "iobuf_large_cache_size": 16 00:33:29.326 } 00:33:29.326 }, 00:33:29.326 { 00:33:29.326 "method": "bdev_raid_set_options", 00:33:29.326 "params": { 00:33:29.326 "process_window_size_kb": 1024, 00:33:29.326 "process_max_bandwidth_mb_sec": 0 00:33:29.326 } 00:33:29.326 }, 00:33:29.326 { 00:33:29.326 "method": "bdev_iscsi_set_options", 00:33:29.326 "params": { 00:33:29.326 "timeout_sec": 30 00:33:29.326 } 00:33:29.326 }, 00:33:29.326 { 00:33:29.326 "method": "bdev_nvme_set_options", 00:33:29.326 "params": { 00:33:29.326 "action_on_timeout": "none", 00:33:29.326 "timeout_us": 0, 00:33:29.326 "timeout_admin_us": 0, 00:33:29.326 "keep_alive_timeout_ms": 10000, 00:33:29.326 "arbitration_burst": 0, 00:33:29.326 "low_priority_weight": 0, 00:33:29.326 "medium_priority_weight": 0, 00:33:29.326 "high_priority_weight": 0, 00:33:29.326 "nvme_adminq_poll_period_us": 10000, 00:33:29.326 "nvme_ioq_poll_period_us": 0, 00:33:29.326 "io_queue_requests": 512, 00:33:29.326 "delay_cmd_submit": true, 00:33:29.326 "transport_retry_count": 4, 00:33:29.326 "bdev_retry_count": 3, 00:33:29.326 "transport_ack_timeout": 0, 00:33:29.326 "ctrlr_loss_timeout_sec": 0, 00:33:29.326 "reconnect_delay_sec": 0, 00:33:29.326 "fast_io_fail_timeout_sec": 0, 00:33:29.326 "disable_auto_failback": false, 00:33:29.326 "generate_uuids": false, 00:33:29.326 "transport_tos": 0, 00:33:29.326 "nvme_error_stat": false, 00:33:29.326 "rdma_srq_size": 0, 00:33:29.326 12:55:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:29.326 "io_path_stat": false, 00:33:29.326 "allow_accel_sequence": false, 00:33:29.326 "rdma_max_cq_size": 0, 00:33:29.326 "rdma_cm_event_timeout_ms": 0, 00:33:29.326 "dhchap_digests": [ 00:33:29.326 "sha256", 00:33:29.326 "sha384", 00:33:29.326 "sha512" 00:33:29.326 ], 00:33:29.326 "dhchap_dhgroups": [ 00:33:29.326 "null", 00:33:29.326 "ffdhe2048", 00:33:29.326 "ffdhe3072", 00:33:29.326 "ffdhe4096", 00:33:29.326 "ffdhe6144", 00:33:29.326 "ffdhe8192" 00:33:29.326 ] 00:33:29.326 } 00:33:29.326 }, 00:33:29.326 { 00:33:29.326 "method": "bdev_nvme_attach_controller", 00:33:29.326 "params": { 00:33:29.326 "name": "nvme0", 00:33:29.326 "trtype": "TCP", 00:33:29.326 "adrfam": "IPv4", 00:33:29.326 "traddr": "127.0.0.1", 00:33:29.326 "trsvcid": "4420", 00:33:29.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:29.326 "prchk_reftag": false, 00:33:29.326 "prchk_guard": false, 00:33:29.326 "ctrlr_loss_timeout_sec": 0, 00:33:29.326 "reconnect_delay_sec": 0, 00:33:29.326 "fast_io_fail_timeout_sec": 0, 00:33:29.326 "psk": "key0", 00:33:29.326 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:29.326 "hdgst": false, 00:33:29.326 "ddgst": false, 00:33:29.326 "multipath": "multipath" 00:33:29.326 } 00:33:29.326 }, 00:33:29.326 { 00:33:29.326 "method": "bdev_nvme_set_hotplug", 00:33:29.326 "params": { 00:33:29.326 
"period_us": 100000, 00:33:29.326 "enable": false 00:33:29.326 } 00:33:29.326 }, 00:33:29.326 { 00:33:29.326 "method": "bdev_wait_for_examine" 00:33:29.326 } 00:33:29.326 ] 00:33:29.326 }, 00:33:29.326 { 00:33:29.326 "subsystem": "nbd", 00:33:29.326 "config": [] 00:33:29.326 } 00:33:29.326 ] 00:33:29.326 }' 00:33:29.326 [2024-11-15 12:55:09.548028] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 00:33:29.326 [2024-11-15 12:55:09.548128] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215852 ] 00:33:29.326 [2024-11-15 12:55:09.617328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.585 [2024-11-15 12:55:09.680865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.585 [2024-11-15 12:55:09.861666] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:29.843 12:55:09 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.843 12:55:09 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:29.843 12:55:09 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:33:29.843 12:55:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:29.843 12:55:09 keyring_file -- keyring/file.sh@121 -- # jq length 00:33:30.101 12:55:10 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:30.101 12:55:10 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:33:30.101 12:55:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:30.101 12:55:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:30.101 12:55:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:30.101 12:55:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:30.101 12:55:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:30.359 12:55:10 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:33:30.359 12:55:10 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:33:30.359 12:55:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:30.359 12:55:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:30.359 12:55:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:30.359 12:55:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:30.359 12:55:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:30.617 12:55:10 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:33:30.617 12:55:10 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:33:30.617 12:55:10 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:33:30.617 12:55:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:30.875 12:55:11 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:33:30.875 12:55:11 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:30.875 12:55:11 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.MF57S8NWR1 /tmp/tmp.crdo7BGxvF 00:33:30.875 12:55:11 keyring_file -- keyring/file.sh@20 -- # killprocess 1215852 00:33:30.875 12:55:11 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1215852 ']' 00:33:30.875 12:55:11 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1215852 00:33:30.875 12:55:11 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:30.875 12:55:11 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:30.875 12:55:11 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1215852 00:33:30.875 12:55:11 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:30.875 12:55:11 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:30.875 12:55:11 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1215852' 00:33:30.875 killing process with pid 1215852 00:33:30.875 12:55:11 keyring_file -- common/autotest_common.sh@973 -- # kill 1215852 00:33:30.875 Received shutdown signal, test time was about 1.000000 seconds 00:33:30.875 00:33:30.875 Latency(us) 00:33:30.875 [2024-11-15T11:55:11.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.875 [2024-11-15T11:55:11.219Z] =================================================================================================================== 00:33:30.875 [2024-11-15T11:55:11.219Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:30.875 12:55:11 keyring_file -- common/autotest_common.sh@978 -- # wait 1215852 00:33:31.132 12:55:11 keyring_file -- keyring/file.sh@21 -- # killprocess 1214377 00:33:31.132 12:55:11 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1214377 ']' 00:33:31.132 12:55:11 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1214377 00:33:31.132 12:55:11 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:31.132 12:55:11 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:31.132 12:55:11 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1214377 00:33:31.132 12:55:11 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:31.132 12:55:11 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:31.132 12:55:11 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1214377' 00:33:31.132 killing process with pid 1214377 00:33:31.132 12:55:11 keyring_file -- common/autotest_common.sh@973 -- # kill 1214377 00:33:31.132 12:55:11 keyring_file -- common/autotest_common.sh@978 -- # wait 1214377 00:33:31.697 00:33:31.697 real 0m14.582s 00:33:31.697 user 0m37.214s 00:33:31.697 sys 0m3.175s 00:33:31.697 12:55:11 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:31.697 12:55:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:31.697 ************************************ 00:33:31.697 END TEST keyring_file 00:33:31.697 ************************************ 00:33:31.697 12:55:11 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:33:31.698 12:55:11 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:31.698 12:55:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:31.698 12:55:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:31.698 12:55:11 
-- common/autotest_common.sh@10 -- # set +x 00:33:31.698 ************************************ 00:33:31.698 START TEST keyring_linux 00:33:31.698 ************************************ 00:33:31.698 12:55:11 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:31.698 Joined session keyring: 706292971 00:33:31.698 * Looking for test storage... 00:33:31.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:31.698 12:55:11 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:31.698 12:55:11 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:33:31.698 12:55:11 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:31.698 12:55:12 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@345 -- # : 1 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@368 -- # return 0 00:33:31.698 12:55:12 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:31.698 12:55:12 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:31.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.698 --rc genhtml_branch_coverage=1 00:33:31.698 --rc genhtml_function_coverage=1 00:33:31.698 --rc genhtml_legend=1 00:33:31.698 --rc geninfo_all_blocks=1 00:33:31.698 --rc geninfo_unexecuted_blocks=1 00:33:31.698 00:33:31.698 ' 00:33:31.698 12:55:12 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:31.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.698 --rc genhtml_branch_coverage=1 00:33:31.698 --rc genhtml_function_coverage=1 00:33:31.698 --rc genhtml_legend=1 00:33:31.698 --rc geninfo_all_blocks=1 00:33:31.698 --rc geninfo_unexecuted_blocks=1 00:33:31.698 00:33:31.698 ' 00:33:31.698 12:55:12 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:31.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.698 --rc genhtml_branch_coverage=1 00:33:31.698 --rc genhtml_function_coverage=1 00:33:31.698 --rc genhtml_legend=1 00:33:31.698 --rc geninfo_all_blocks=1 00:33:31.698 --rc geninfo_unexecuted_blocks=1 00:33:31.698 00:33:31.698 ' 00:33:31.698 12:55:12 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:31.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.698 --rc genhtml_branch_coverage=1 00:33:31.698 --rc genhtml_function_coverage=1 00:33:31.698 --rc genhtml_legend=1 00:33:31.698 --rc geninfo_all_blocks=1 00:33:31.698 --rc geninfo_unexecuted_blocks=1 00:33:31.698 00:33:31.698 ' 00:33:31.698 12:55:12 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:31.698 12:55:12 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.698 12:55:12 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.698 12:55:12 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.698 12:55:12 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.698 12:55:12 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.698 12:55:12 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:31.698 12:55:12 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:31.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:31.698 12:55:12 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:31.698 12:55:12 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:31.698 12:55:12 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:31.698 12:55:12 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:31.698 12:55:12 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:31.698 12:55:12 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:31.698 12:55:12 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:31.698 12:55:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:31.698 12:55:12 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:31.698 12:55:12 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:31.698 12:55:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:31.698 12:55:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:31.698 12:55:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:31.698 12:55:12 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:31.699 12:55:12 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:33:31.699 12:55:12 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:31.699 12:55:12 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:31.699 12:55:12 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:33:31.699 12:55:12 keyring_linux -- nvmf/common.sh@733 -- # python - 00:33:31.957 12:55:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:31.957 12:55:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:31.957 /tmp/:spdk-test:key0 00:33:31.957 12:55:12 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:31.957 12:55:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:31.957 12:55:12 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:31.957 12:55:12 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:31.957 12:55:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:31.957 12:55:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:31.957 
12:55:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:31.957 12:55:12 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:31.957 12:55:12 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:33:31.957 12:55:12 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:31.957 12:55:12 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:31.957 12:55:12 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:33:31.957 12:55:12 keyring_linux -- nvmf/common.sh@733 -- # python - 00:33:31.957 12:55:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:31.957 12:55:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:31.957 /tmp/:spdk-test:key1 00:33:31.957 12:55:12 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1216340 00:33:31.957 12:55:12 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:31.957 12:55:12 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1216340 00:33:31.957 12:55:12 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1216340 ']' 00:33:31.957 12:55:12 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.957 12:55:12 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:31.957 12:55:12 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.957 12:55:12 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:31.957 12:55:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:31.957 [2024-11-15 12:55:12.164279] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
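A minimal sketch (not part of the test run) of what the "format_interchange_psk ... | python -" step above appears to compute when it turns the hex key text into the NVMeTLSkey-1 strings loaded into /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. The exact layout is an assumption inferred from the values printed later in this log: prefix, two-hex-digit digest indicator, then base64 of the key text with a CRC-32 appended; the CRC byte order is likewise assumed.

# Hedged illustration only; the helper name and CRC byte order are assumptions,
# not a verbatim copy of the script invoked above.
import base64
import zlib

def format_interchange_psk(key: str, digest: int = 0) -> str:
    """Build an NVMe/TCP TLS PSK interchange string from the configured key text."""
    data = key.encode("ascii")                     # e.g. "00112233445566778899aabbccddeeff"
    crc = zlib.crc32(data).to_bytes(4, "little")   # assumed: 4-byte CRC-32 appended before encoding
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(data + crc).decode())

if __name__ == "__main__":
    # With digest 0 this yields strings of the same shape as the ones added to the
    # session keyring below (NVMeTLSkey-1:00:<48 base64 chars>:).
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))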
00:33:31.957 [2024-11-15 12:55:12.164366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216340 ] 00:33:31.957 [2024-11-15 12:55:12.230007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.957 [2024-11-15 12:55:12.292491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.523 12:55:12 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.523 12:55:12 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:33:32.523 12:55:12 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:32.523 12:55:12 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.523 12:55:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:32.523 [2024-11-15 12:55:12.571662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.523 null0 00:33:32.523 [2024-11-15 12:55:12.603744] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:32.523 [2024-11-15 12:55:12.604254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:32.523 12:55:12 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.523 12:55:12 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:32.523 462331916 00:33:32.523 12:55:12 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:32.523 571634452 00:33:32.523 12:55:12 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1216346 00:33:32.523 12:55:12 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1216346 /var/tmp/bperf.sock 00:33:32.523 12:55:12 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:32.523 12:55:12 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1216346 ']' 00:33:32.523 12:55:12 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:32.523 12:55:12 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.523 12:55:12 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:32.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:32.523 12:55:12 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.523 12:55:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:32.523 [2024-11-15 12:55:12.674614] Starting SPDK v25.01-pre git sha1 c46ddd981 / DPDK 24.03.0 initialization... 
00:33:32.523 [2024-11-15 12:55:12.674702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216346 ] 00:33:32.523 [2024-11-15 12:55:12.739342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.523 [2024-11-15 12:55:12.798458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.781 12:55:12 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.781 12:55:12 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:33:32.781 12:55:12 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:32.781 12:55:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:33.038 12:55:13 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:33.038 12:55:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:33.297 12:55:13 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:33.297 12:55:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:33.555 [2024-11-15 12:55:13.783301] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:33.555 nvme0n1 00:33:33.555 12:55:13 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:33.555 12:55:13 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:33.555 12:55:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:33.555 12:55:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:33.555 12:55:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.555 12:55:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:33.813 12:55:14 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:33.813 12:55:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:33.813 12:55:14 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:33.813 12:55:14 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:33.813 12:55:14 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.813 12:55:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.813 12:55:14 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:34.380 12:55:14 keyring_linux -- keyring/linux.sh@25 -- # sn=462331916 00:33:34.380 12:55:14 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:34.380 12:55:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:34.380 12:55:14 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 462331916 == \4\6\2\3\3\1\9\1\6 ]] 00:33:34.380 12:55:14 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 462331916 00:33:34.380 12:55:14 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:34.380 12:55:14 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:34.380 Running I/O for 1 seconds... 00:33:35.321 10081.00 IOPS, 39.38 MiB/s 00:33:35.321 Latency(us) 00:33:35.321 [2024-11-15T11:55:15.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.321 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:35.321 nvme0n1 : 1.01 10080.79 39.38 0.00 0.00 12611.51 7573.05 19126.80 00:33:35.321 [2024-11-15T11:55:15.665Z] =================================================================================================================== 00:33:35.321 [2024-11-15T11:55:15.665Z] Total : 10080.79 39.38 0.00 0.00 12611.51 7573.05 19126.80 00:33:35.321 { 00:33:35.321 "results": [ 00:33:35.321 { 00:33:35.321 "job": "nvme0n1", 00:33:35.321 "core_mask": "0x2", 00:33:35.321 "workload": "randread", 00:33:35.321 "status": "finished", 00:33:35.321 "queue_depth": 128, 00:33:35.321 "io_size": 4096, 00:33:35.321 "runtime": 1.012817, 00:33:35.321 "iops": 10080.794457439004, 00:33:35.321 "mibps": 39.37810334937111, 00:33:35.321 "io_failed": 0, 00:33:35.321 "io_timeout": 0, 00:33:35.321 "avg_latency_us": 12611.51372002757, 00:33:35.321 "min_latency_us": 7573.0488888888885, 00:33:35.321 "max_latency_us": 19126.802962962964 00:33:35.321 } 00:33:35.321 ], 00:33:35.321 "core_count": 1 00:33:35.321 } 00:33:35.321 12:55:15 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:35.321 12:55:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:35.580 12:55:15 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:35.580 12:55:15 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:35.580 12:55:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:35.580 12:55:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:35.580 12:55:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:35.580 12:55:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:35.838 12:55:16 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:35.838 12:55:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:35.838 12:55:16 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:35.838 12:55:16 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:35.838 12:55:16 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:33:35.838 12:55:16 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:33:35.838 12:55:16 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:35.838 12:55:16 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:35.838 12:55:16 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:35.838 12:55:16 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:35.838 12:55:16 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:35.838 12:55:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:36.097 [2024-11-15 12:55:16.357794] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:36.097 [2024-11-15 12:55:16.358622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea3bc0 (107): Transport endpoint is not connected 00:33:36.097 [2024-11-15 12:55:16.359615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea3bc0 (9): Bad file descriptor 00:33:36.097 [2024-11-15 12:55:16.360614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:36.097 [2024-11-15 12:55:16.360632] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:36.097 [2024-11-15 12:55:16.360655] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:36.097 [2024-11-15 12:55:16.360668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:33:36.097 request: 00:33:36.097 { 00:33:36.097 "name": "nvme0", 00:33:36.097 "trtype": "tcp", 00:33:36.097 "traddr": "127.0.0.1", 00:33:36.097 "adrfam": "ipv4", 00:33:36.097 "trsvcid": "4420", 00:33:36.097 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:36.097 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:36.097 "prchk_reftag": false, 00:33:36.097 "prchk_guard": false, 00:33:36.097 "hdgst": false, 00:33:36.097 "ddgst": false, 00:33:36.097 "psk": ":spdk-test:key1", 00:33:36.097 "allow_unrecognized_csi": false, 00:33:36.097 "method": "bdev_nvme_attach_controller", 00:33:36.097 "req_id": 1 00:33:36.097 } 00:33:36.097 Got JSON-RPC error response 00:33:36.097 response: 00:33:36.097 { 00:33:36.097 "code": -5, 00:33:36.097 "message": "Input/output error" 00:33:36.097 } 00:33:36.097 12:55:16 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:33:36.097 12:55:16 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:36.097 12:55:16 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:36.097 12:55:16 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:36.097 12:55:16 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:36.097 12:55:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:36.097 12:55:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:36.097 12:55:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:36.097 12:55:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:36.097 12:55:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:36.097 12:55:16 keyring_linux -- keyring/linux.sh@33 -- # sn=462331916 00:33:36.097 12:55:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 462331916 00:33:36.097 1 links removed 00:33:36.097 12:55:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:36.097 12:55:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:36.097 12:55:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:36.097 12:55:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:36.097 12:55:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:36.097 12:55:16 keyring_linux -- keyring/linux.sh@33 -- # sn=571634452 00:33:36.098 12:55:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 571634452 00:33:36.098 1 links removed 00:33:36.098 12:55:16 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1216346 00:33:36.098 12:55:16 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1216346 ']' 00:33:36.098 12:55:16 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1216346 00:33:36.098 12:55:16 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:33:36.098 12:55:16 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.098 12:55:16 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1216346 00:33:36.098 12:55:16 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:36.098 12:55:16 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:36.098 12:55:16 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1216346' 00:33:36.098 killing process with pid 1216346 00:33:36.098 12:55:16 keyring_linux -- common/autotest_common.sh@973 -- # kill 1216346 00:33:36.098 Received shutdown signal, test time was about 1.000000 seconds 00:33:36.098 00:33:36.098 
Latency(us) 00:33:36.098 [2024-11-15T11:55:16.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.098 [2024-11-15T11:55:16.442Z] =================================================================================================================== 00:33:36.098 [2024-11-15T11:55:16.442Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:36.098 12:55:16 keyring_linux -- common/autotest_common.sh@978 -- # wait 1216346 00:33:36.356 12:55:16 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1216340 00:33:36.356 12:55:16 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1216340 ']' 00:33:36.356 12:55:16 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1216340 00:33:36.356 12:55:16 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:33:36.356 12:55:16 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.356 12:55:16 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1216340 00:33:36.356 12:55:16 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:36.356 12:55:16 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:36.356 12:55:16 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1216340' 00:33:36.356 killing process with pid 1216340 00:33:36.356 12:55:16 keyring_linux -- common/autotest_common.sh@973 -- # kill 1216340 00:33:36.356 12:55:16 keyring_linux -- common/autotest_common.sh@978 -- # wait 1216340 00:33:36.922 00:33:36.922 real 0m5.150s 00:33:36.922 user 0m10.309s 00:33:36.922 sys 0m1.581s 00:33:36.922 12:55:17 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:36.922 12:55:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:36.922 ************************************ 00:33:36.922 END TEST keyring_linux 00:33:36.922 ************************************ 00:33:36.922 12:55:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:36.922 12:55:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:36.922 12:55:17 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:36.922 12:55:17 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:33:36.922 12:55:17 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:36.922 12:55:17 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:36.922 12:55:17 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:36.922 12:55:17 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:36.922 12:55:17 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:36.922 12:55:17 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:36.922 12:55:17 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:36.922 12:55:17 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:36.922 12:55:17 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:36.922 12:55:17 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:36.922 12:55:17 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:36.922 12:55:17 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:36.922 12:55:17 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:36.922 12:55:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:36.922 12:55:17 -- common/autotest_common.sh@10 -- # set +x 00:33:36.922 12:55:17 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:36.922 12:55:17 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:36.922 12:55:17 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:36.922 12:55:17 -- common/autotest_common.sh@10 -- # set +x 00:33:38.824 INFO: APP EXITING 
00:33:38.824 INFO: killing all VMs 00:33:38.824 INFO: killing vhost app 00:33:38.824 INFO: EXIT DONE 00:33:39.760 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:33:39.760 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:33:39.760 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:33:39.760 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:33:39.760 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:33:39.760 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:33:39.760 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:33:39.760 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:33:39.760 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:33:39.760 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:33:40.017 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:33:40.017 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:33:40.017 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:33:40.017 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:33:40.017 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:33:40.017 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:33:40.017 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:33:41.491 Cleaning 00:33:41.491 Removing: /var/run/dpdk/spdk0/config 00:33:41.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:41.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:41.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:41.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:41.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:41.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:41.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:41.491 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:41.491 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:41.491 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:41.491 Removing: /var/run/dpdk/spdk1/config 00:33:41.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:41.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:41.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:41.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:41.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:41.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:41.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:41.491 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:41.491 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:41.491 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:41.491 Removing: /var/run/dpdk/spdk2/config 00:33:41.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:41.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:41.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:41.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:41.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:41.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:41.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:41.491 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:41.491 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:41.491 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:41.491 Removing: /var/run/dpdk/spdk3/config 00:33:41.491 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:33:41.491 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:33:41.491 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:33:41.491 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:33:41.491 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:33:41.491 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:33:41.491 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:33:41.491 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:33:41.491 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:33:41.491 Removing: /var/run/dpdk/spdk3/hugepage_info
00:33:41.491 Removing: /var/run/dpdk/spdk4/config
00:33:41.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:33:41.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:33:41.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:33:41.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:33:41.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:33:41.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:33:41.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:33:41.491 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:33:41.491 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:33:41.491 Removing: /var/run/dpdk/spdk4/hugepage_info
00:33:41.491 Removing: /dev/shm/bdev_svc_trace.1
00:33:41.491 Removing: /dev/shm/nvmf_trace.0
00:33:41.491 Removing: /dev/shm/spdk_tgt_trace.pid894270
00:33:41.491 Removing: /var/run/dpdk/spdk0
00:33:41.491 Removing: /var/run/dpdk/spdk1
00:33:41.491 Removing: /var/run/dpdk/spdk2
00:33:41.491 Removing: /var/run/dpdk/spdk3
00:33:41.491 Removing: /var/run/dpdk/spdk4
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1000089
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1000628
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1001027
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1001151
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1001288
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1002182
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1003028
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1008867
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1037007
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1039936
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1041115
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1042432
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1042548
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1042623
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1042740
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1043307
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1044620
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1045368
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1045799
00:33:41.491 Removing: /var/run/dpdk/spdk_pid1047416
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1047837
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1048279
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1050672
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1054070
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1054071
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1054072
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1056311
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1061166
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1064549
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1068332
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1069280
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1070376
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1071465
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1074239
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1076825
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1079188
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1083418
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1083426
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1086327
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1086459
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1086595
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1086925
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1086992
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1089758
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1090105
00:33:41.492 Removing: /var/run/dpdk/spdk_pid1092776
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1094700
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1098172
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1102248
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1108753
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1113236
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1113247
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1125616
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1126062
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1126545
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1126955
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1127535
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1127958
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1128368
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1128888
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1131379
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1131539
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1135713
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1136027
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1139505
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1142110
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1148954
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1149422
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1151818
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1152091
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1154721
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1158415
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1160568
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1166954
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1172774
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1173958
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1174612
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1184818
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1187069
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1189082
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1194134
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1194142
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1197043
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1198451
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1199914
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1200704
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1202110
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1203082
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1208950
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1209298
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1209686
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1211251
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1211652
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1211930
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1214377
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1214388
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1215852
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1216340
00:33:41.782 Removing: /var/run/dpdk/spdk_pid1216346
00:33:41.782 Removing: /var/run/dpdk/spdk_pid892585
00:33:41.782 Removing: /var/run/dpdk/spdk_pid893328
00:33:41.782 Removing: /var/run/dpdk/spdk_pid894270
00:33:41.782 Removing: /var/run/dpdk/spdk_pid894613
00:33:41.782 Removing: /var/run/dpdk/spdk_pid895291
00:33:41.782 Removing: /var/run/dpdk/spdk_pid895431
00:33:41.782 Removing: /var/run/dpdk/spdk_pid896149
00:33:41.782 Removing: /var/run/dpdk/spdk_pid896277
00:33:41.782 Removing: /var/run/dpdk/spdk_pid896534
00:33:41.782 Removing: /var/run/dpdk/spdk_pid897743
00:33:41.782 Removing: /var/run/dpdk/spdk_pid898684
00:33:41.782 Removing: /var/run/dpdk/spdk_pid898995
00:33:41.782 Removing: /var/run/dpdk/spdk_pid899196
00:33:41.782 Removing: /var/run/dpdk/spdk_pid899440
00:33:41.782 Removing: /var/run/dpdk/spdk_pid899723
00:33:41.782 Removing: /var/run/dpdk/spdk_pid899879
00:33:41.782 Removing: /var/run/dpdk/spdk_pid900039
00:33:41.782 Removing: /var/run/dpdk/spdk_pid900229
00:33:41.782 Removing: /var/run/dpdk/spdk_pid900541
00:33:41.782 Removing: /var/run/dpdk/spdk_pid903657
00:33:41.782 Removing: /var/run/dpdk/spdk_pid903819
00:33:41.782 Removing: /var/run/dpdk/spdk_pid903981
00:33:41.782 Removing: /var/run/dpdk/spdk_pid903989
00:33:41.782 Removing: /var/run/dpdk/spdk_pid904415
00:33:41.782 Removing: /var/run/dpdk/spdk_pid904429
00:33:41.782 Removing: /var/run/dpdk/spdk_pid904763
00:33:41.782 Removing: /var/run/dpdk/spdk_pid904863
00:33:41.782 Removing: /var/run/dpdk/spdk_pid905028
00:33:41.782 Removing: /var/run/dpdk/spdk_pid905156
00:33:41.782 Removing: /var/run/dpdk/spdk_pid905320
00:33:41.782 Removing: /var/run/dpdk/spdk_pid905331
00:33:41.782 Removing: /var/run/dpdk/spdk_pid905823
00:33:41.782 Removing: /var/run/dpdk/spdk_pid905979
00:33:41.782 Removing: /var/run/dpdk/spdk_pid906193
00:33:41.782 Removing: /var/run/dpdk/spdk_pid908308
00:33:41.782 Removing: /var/run/dpdk/spdk_pid910943
00:33:41.782 Removing: /var/run/dpdk/spdk_pid918074
00:33:41.782 Removing: /var/run/dpdk/spdk_pid918483
00:33:41.782 Removing: /var/run/dpdk/spdk_pid921005
00:33:41.782 Removing: /var/run/dpdk/spdk_pid921283
00:33:41.782 Removing: /var/run/dpdk/spdk_pid923930
00:33:41.782 Removing: /var/run/dpdk/spdk_pid927657
00:33:41.782 Removing: /var/run/dpdk/spdk_pid929843
00:33:41.782 Removing: /var/run/dpdk/spdk_pid936381
00:33:41.782 Removing: /var/run/dpdk/spdk_pid942123
00:33:41.782 Removing: /var/run/dpdk/spdk_pid943402
00:33:41.782 Removing: /var/run/dpdk/spdk_pid944046
00:33:41.782 Removing: /var/run/dpdk/spdk_pid954504
00:33:41.782 Removing: /var/run/dpdk/spdk_pid956920
00:33:41.782 Removing: /var/run/dpdk/spdk_pid984694
00:33:41.782 Removing: /var/run/dpdk/spdk_pid987879
00:33:41.782 Removing: /var/run/dpdk/spdk_pid991833
00:33:41.782 Removing: /var/run/dpdk/spdk_pid996109
00:33:41.782 Removing: /var/run/dpdk/spdk_pid996206
00:33:41.782 Removing: /var/run/dpdk/spdk_pid996768
00:33:42.041 Removing: /var/run/dpdk/spdk_pid997424
00:33:42.041 Removing: /var/run/dpdk/spdk_pid997979
00:33:42.041 Removing: /var/run/dpdk/spdk_pid998374
00:33:42.041 Removing: /var/run/dpdk/spdk_pid998483
00:33:42.041 Removing: /var/run/dpdk/spdk_pid998631
00:33:42.041 Removing: /var/run/dpdk/spdk_pid998768
00:33:42.041 Removing: /var/run/dpdk/spdk_pid998777
00:33:42.041 Removing: /var/run/dpdk/spdk_pid999427
00:33:42.041 Clean
00:33:42.041 12:55:22 -- common/autotest_common.sh@1453 -- # return 0
00:33:42.041 12:55:22 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:33:42.041 12:55:22 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:42.041 12:55:22 -- common/autotest_common.sh@10 -- # set +x
00:33:42.041 12:55:22 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:33:42.041 12:55:22 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:42.041 12:55:22 -- common/autotest_common.sh@10 -- # set +x
00:33:42.041 12:55:22 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:42.041 12:55:22 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:33:42.041 12:55:22 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:33:42.041 12:55:22 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:33:42.041 12:55:22 -- spdk/autotest.sh@398 -- # hostname
00:33:42.041 12:55:22 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:33:42.300 geninfo: WARNING: invalid characters removed from testname!
00:34:14.371 12:55:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:17.653 12:55:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:20.935 12:56:00 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:23.463 12:56:03 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:26.745 12:56:06 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:30.027 12:56:09 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:32.557 12:56:12 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:32.557 12:56:12 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:32.557 12:56:12 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:34:32.557 12:56:12 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:32.557 12:56:12 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:32.557 12:56:12 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:32.557 + [[ -n 822111 ]]
00:34:32.557 + sudo kill 822111
00:34:32.568 [Pipeline] }
00:34:32.583 [Pipeline] // stage
00:34:32.588 [Pipeline] }
00:34:32.602 [Pipeline] // timeout
00:34:32.608 [Pipeline] }
00:34:32.622 [Pipeline] // catchError
00:34:32.628 [Pipeline] }
00:34:32.644 [Pipeline] // wrap
00:34:32.651 [Pipeline] }
00:34:32.664 [Pipeline] // catchError
00:34:32.675 [Pipeline] stage
00:34:32.679 [Pipeline] { (Epilogue)
00:34:32.693 [Pipeline] catchError
00:34:32.695 [Pipeline] {
00:34:32.710 [Pipeline] echo
00:34:32.712 Cleanup processes
00:34:32.719 [Pipeline] sh
00:34:33.008 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:33.008 1227037 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:33.025 [Pipeline] sh
00:34:33.312 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:33.312 ++ grep -v 'sudo pgrep'
00:34:33.312 ++ awk '{print $1}'
00:34:33.312 + sudo kill -9
00:34:33.312 + true
00:34:33.325 [Pipeline] sh
00:34:33.615 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:43.625 [Pipeline] sh
00:34:43.915 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:43.915 Artifacts sizes are good
00:34:43.933 [Pipeline] archiveArtifacts
00:34:43.942 Archiving artifacts
00:34:44.093 [Pipeline] sh
00:34:44.376 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:34:44.390 [Pipeline] cleanWs
00:34:44.400 [WS-CLEANUP] Deleting project workspace...
00:34:44.400 [WS-CLEANUP] Deferred wipeout is used...
00:34:44.408 [WS-CLEANUP] done
00:34:44.410 [Pipeline] }
00:34:44.431 [Pipeline] // catchError
00:34:44.444 [Pipeline] sh
00:34:44.793 + logger -p user.info -t JENKINS-CI
00:34:44.868 [Pipeline] }
00:34:44.881 [Pipeline] // stage
00:34:44.886 [Pipeline] }
00:34:44.899 [Pipeline] // node
00:34:44.903 [Pipeline] End of Pipeline
00:34:44.938 Finished: SUCCESS
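
Note on the coverage post-processing traced in the autotest.sh tail above: the sequence captures per-test coverage with lcov, merges it with the pre-test baseline, then strips DPDK, system, and helper-app paths from the combined tracefile before artifacts are archived, and finally renders the per-step timing file as a flame graph when FlameGraph is installed. The sketch below restates that flow in a self-contained form; SRC, OUT, and the output file names are illustrative placeholders rather than values taken from this job.

#!/usr/bin/env bash
# Hedged sketch of the coverage post-processing shown in the log above.
# SRC and OUT are placeholders; the real job derives them from its Jenkins workspace.
set -euo pipefail

SRC=/path/to/spdk      # source tree built with coverage instrumentation (placeholder)
OUT=/path/to/output    # directory holding cov_base.info and timing.txt (placeholder)
RC=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)

# 1. Capture coverage gathered during the test run, tagged with the host name.
lcov "${RC[@]}" -q -c --no-external -d "$SRC" -t "$(hostname)" -o "$OUT/cov_test.info"

# 2. Merge the pre-test baseline with the test capture.
lcov "${RC[@]}" -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# 3. Remove paths that should not count toward coverage, one pattern per pass.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${RC[@]}" -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done

# 4. Optionally render the per-step timing log as a flame graph.
if [[ -x /usr/local/FlameGraph/flamegraph.pl ]]; then
    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
        --countname seconds "$OUT/timing.txt" > "$OUT/timing.svg"
fi

One reading of the separate lcov -r passes in the trace is that filtering after the merge keeps the baseline and test captures symmetric, so each exclusion pattern can be applied (or dropped) independently without recapturing coverage.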